modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
VitoCorleone72/Olivia | VitoCorleone72 | 2025-01-03T10:06:47Z | 23 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T10:06:43Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/00149-1159619818.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Olivia
---
# Olivia
<Gallery />
## Trigger words
You should use `Olivia` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/VitoCorleone72/Olivia/tree/main) them in the Files & versions tab.
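## Use with diffusers
The card does not include usage code; the following is a minimal sketch of loading this LoRA on top of its FLUX.1-dev base with the `diffusers` library (the prompt text and output file name are illustrative, not from the card):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach the LoRA adapter from this repository.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("VitoCorleone72/Olivia")

# Include the trigger word `Olivia` in the prompt to activate the concept.
image = pipe("Olivia, portrait photo", num_inference_steps=28).images[0]
image.save("olivia.png")
```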
|
mradermacher/ACultriX-7B-GGUF | mradermacher | 2025-01-03T10:04:41Z | 19 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CultriX/ACultriX-7B",
"base_model:quantized:CultriX/ACultriX-7B",
"endpoints_compatible",
"region:us"
]
| null | 2025-01-03T00:24:38Z | ---
base_model: CultriX/ACultriX-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/CultriX/ACultriX-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ACultriX-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
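As a minimal sketch (not from this card), a single-file quant can be run directly with llama.cpp's CLI; the file name assumes you downloaded the Q4_K_M quant from the table below:
```bash
# llama-cli ships with llama.cpp (https://github.com/ggerganov/llama.cpp)
llama-cli -m ACultriX-7B.Q4_K_M.gguf -p "Write a haiku about model merging." -n 128
```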
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ACultriX-7B-GGUF/resolve/main/ACultriX-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LeMoussel/FR-categories_multilingual-e5-base | LeMoussel | 2025-01-03T10:04:20Z | 131 | 0 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-03T10:03:06Z | ---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: FR-categories_multilingual-e5-base
  results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FR-categories_multilingual-e5-base
This model was trained from scratch on an unknown dataset.
No evaluation results were recorded.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.47.1
- TensorFlow 2.18.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
csikasote/mms-1b-swagen-combined-20hrs-model | csikasote | 2025-01-03T10:02:21Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-01-03T08:52:54Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-swagen-combined-20hrs-model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-swagen-combined-20hrs-model
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2249
- Wer: 0.1913
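For reference, a minimal inference sketch (not from the card) using the `transformers` ASR pipeline; the audio file name is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-swagen-combined-20hrs-model",
)
print(asr("sample.wav")["text"])  # path to a local audio file (assumed)
```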
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 13.3661 | 0.0596 | 100 | 0.4504 | 0.2830 |
| 0.6536 | 0.1193 | 200 | 0.2726 | 0.2036 |
| 0.4925 | 0.1789 | 300 | 0.2521 | 0.1979 |
| 0.5177 | 0.2385 | 400 | 0.2536 | 0.2035 |
| 0.53 | 0.2982 | 500 | 0.2374 | 0.1964 |
| 0.4791 | 0.3578 | 600 | 0.2359 | 0.1938 |
| 0.4699 | 0.4174 | 700 | 0.2374 | 0.1982 |
| 0.4791 | 0.4770 | 800 | 0.2356 | 0.1954 |
| 0.4269 | 0.5367 | 900 | 0.2317 | 0.1951 |
| 0.4646 | 0.5963 | 1000 | 0.2311 | 0.1958 |
| 0.4492 | 0.6559 | 1100 | 0.2326 | 0.1954 |
| 0.4438 | 0.7156 | 1200 | 0.2309 | 0.1924 |
| 0.4551 | 0.7752 | 1300 | 0.2329 | 0.1951 |
| 0.4828 | 0.8348 | 1400 | 0.2290 | 0.1895 |
| 0.4502 | 0.8945 | 1500 | 0.2273 | 0.1915 |
| 0.4818 | 0.9541 | 1600 | 0.2249 | 0.1913 |
| 0.4286 | 1.0137 | 1700 | 0.2280 | 0.1918 |
| 0.42 | 1.0733 | 1800 | 0.2303 | 0.1939 |
| 0.4584 | 1.1330 | 1900 | 0.2288 | 0.1925 |
| 0.4255 | 1.1926 | 2000 | 0.2249 | 0.1924 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso11/fde22031-44ea-43f8-ac1d-6bebbfde5d49 | lesso11 | 2025-01-03T10:01:19Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-03T09:58:35Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fde22031-44ea-43f8-ac1d-6bebbfde5d49
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
datasets:
- data_files:
  - 7cab71aee4d2d374_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/7cab71aee4d2d374_train_data.json
  type:
    field_instruction: query
    field_output: answer
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso11/fde22031-44ea-43f8-ac1d-6bebbfde5d49
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/7cab71aee4d2d374_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
  pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fde22031-44ea-43f8-ac1d-6bebbfde5d49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fde22031-44ea-43f8-ac1d-6bebbfde5d49
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# fde22031-44ea-43f8-ac1d-6bebbfde5d49
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7545
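Since this repository contains a PEFT (LoRA) adapter rather than full weights, a minimal loading sketch (assumed, not from the card) looks like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then apply the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("llamafactory/tiny-random-Llama-3")
model = PeftModel.from_pretrained(base, "lesso11/fde22031-44ea-43f8-ac1d-6bebbfde5d49")
tokenizer = AutoTokenizer.from_pretrained("llamafactory/tiny-random-Llama-3")
```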
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7674 | 0.0017 | 1 | 11.7603 |
| 11.7578 | 0.0150 | 9 | 11.7600 |
| 11.7634 | 0.0299 | 18 | 11.7593 |
| 11.7611 | 0.0449 | 27 | 11.7585 |
| 11.7557 | 0.0599 | 36 | 11.7577 |
| 11.7613 | 0.0748 | 45 | 11.7569 |
| 11.7463 | 0.0898 | 54 | 11.7561 |
| 11.7585 | 0.1047 | 63 | 11.7554 |
| 11.7566 | 0.1197 | 72 | 11.7549 |
| 11.7592 | 0.1347 | 81 | 11.7547 |
| 11.7514 | 0.1496 | 90 | 11.7546 |
| 11.7602 | 0.1646 | 99 | 11.7545 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Calyx_7B-i1-GGUF | mradermacher | 2025-01-03T10:00:05Z | 32 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"fine-tune",
"roleplay",
"en",
"dataset:Himitsui/Lewd-Assistant-v1",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"base_model:rmdhirr/Calyx_7B",
"base_model:quantized:rmdhirr/Calyx_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-01-03T08:59:06Z | ---
base_model: rmdhirr/Calyx_7B
datasets:
- Himitsui/Lewd-Assistant-v1
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- fine-tune
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rmdhirr/Calyx_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Calyx_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
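Alternatively, a minimal sketch (not from this card) using the `llama-cpp-python` bindings; the file name assumes the i1-Q4_K_M quant from the table below:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Calyx_7B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```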
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Calyx_7B-i1-GGUF/resolve/main/Calyx_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF | mradermacher | 2025-01-03T09:57:58Z | 80 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Hasnonname/Qwen2.5-14B-Kestrel-v0",
"base_model:quantized:Hasnonname/Qwen2.5-14B-Kestrel-v0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T08:03:26Z | ---
base_model: Hasnonname/Qwen2.5-14B-Kestrel-v0
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hasnonname/Qwen2.5-14B-Kestrel-v0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kestrel-v0-GGUF/resolve/main/Qwen2.5-14B-Kestrel-v0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Triangle104/Deep-Throat-3B | Triangle104 | 2025-01-03T09:57:57Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:merge:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:prithivMLmods/Llama-Deepsync-3B",
"base_model:merge:prithivMLmods/Llama-Deepsync-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T09:33:17Z | ---
base_model:
- prithivMLmods/Llama-Deepsync-3B
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: llama3.2
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
An attempt at creating a model that can complete text-generation tasks requiring deep reasoning, logical structuring, and problem-solving, but with some censorship removed.
### Merge Method
This model was merged using the SLERP merge method.
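For reference (not part of the original card), SLERP interpolates each pair of weight tensors along the great-circle arc between them rather than linearly; the interpolation factor t varies across layer groups via the `t` list in the configuration below:
```latex
\mathrm{slerp}(p, q, t)
  = \frac{\sin\bigl((1-t)\,\theta\bigr)}{\sin\theta}\, p
  + \frac{\sin\bigl(t\,\theta\bigr)}{\sin\theta}\, q,
\qquad
\theta = \arccos\!\left(\frac{p \cdot q}{\lVert p\rVert\,\lVert q\rVert}\right)
```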
### Models Merged
The following models were included in the merge:
* [prithivMLmods/Llama-Deepsync-3B](https://huggingface.co/prithivMLmods/Llama-Deepsync-3B)
* [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: prithivMLmods/Llama-Deepsync-3B
- model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
merge_method: slerp
base_model: prithivMLmods/Llama-Deepsync-3B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
``` |
ehristoforu/qwenfranken2.5-7b-it | ehristoforu | 2025-01-03T09:52:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T09:49:33Z | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [0, 15]
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [15, 28]
merge_method: passthrough
dtype: float16
```
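A minimal sketch (assumed, not from the card) of reproducing this merge with the mergekit CLI, where `config.yaml` is a local copy of the configuration above:
```bash
pip install mergekit
mergekit-yaml config.yaml ./qwenfranken2.5-7b-it
```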
|
VitoCorleone72/Anna | VitoCorleone72 | 2025-01-03T09:48:37Z | 30 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T09:48:28Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/tmpm97w4pqr.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anna
---
# Anna
<Gallery />
## Trigger words
You should use `anna` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/VitoCorleone72/Anna/tree/main) them in the Files & versions tab.
|
AbdullahKnn/results_t5base | AbdullahKnn | 2025-01-03T09:48:17Z | 170 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-01-02T00:25:51Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results_t5base
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_t5base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2853
- Rouge1: 0.1769
- Rouge2: 0.0613
- Rougel: 0.1403
- Rougelsum: 0.1403
- Gen Len: 19.0
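The ROUGE metrics and fixed generation length suggest a summarization-style task; under that assumption, a minimal inference sketch (not from the card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="AbdullahKnn/results_t5base")
print(summarizer("Paste a long input article here ...")[0]["summary_text"])
```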
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.45 | 0.24 | 3000 | 2.4080 | 0.171 | 0.0573 | 0.1357 | 0.1357 | 19.0 |
| 2.5438 | 0.48 | 6000 | 2.3472 | 0.1756 | 0.0597 | 0.1389 | 0.1389 | 19.0 |
| 2.3614 | 0.72 | 9000 | 2.3018 | 0.1773 | 0.0615 | 0.1407 | 0.1407 | 19.0 |
| 2.3553 | 0.96 | 12000 | 2.2853 | 0.1769 | 0.0613 | 0.1403 | 0.1403 | 19.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
RyanYr/reflect_mini8B_Om2SftT2_Om2G8kOm2Ag40kIpsdpIter1T02_b0.1 | RyanYr | 2025-01-03T09:46:06Z | 1,632 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6",
"base_model:finetune:RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T06:04:47Z | ---
base_model: RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6
library_name: transformers
model_name: reflect_mini8B_Om2SftT2_Om2G8kOm2Ag40kIpsdpIter1T02_b0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT2_Om2G8kOm2Ag40kIpsdpIter1T02_b0.1
This model is a fine-tuned version of [RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6](https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT2_Om2G8kOm2Ag40kIpsdpIter1T02_b0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/wgyq2m4c)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
fawzanaramam/Whisper-Small-Finetuned-on-Surah-Fatiha | fawzanaramam | 2025-01-03T09:41:15Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"fine-tuned",
"Quran",
"arabic",
"ar",
"dataset:fawzanaramam/the-truth-1st-chapter",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-06-02T03:44:24Z | ---
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- fine-tuned
- Quran
- automatic-speech-recognition
- arabic
- whisper
datasets:
- fawzanaramam/the-truth-1st-chapter
metrics:
- wer
model-index:
- name: Whisper Small Finetuned on Surah Fatiha
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: The Truth 2.0 - Surah Fatiha
      type: fawzanaramam/the-truth-1st-chapter
      args: 'config: ar, split: train'
    metrics:
    - name: Word Error Rate (WER)
      type: wer
      value: 0.0
---
# Whisper Small Finetuned on Surah Fatiha
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) for transcribing Surah Fatiha, the first chapter of the Quran. It was trained on *The Truth 2.0 - Surah Fatiha* dataset and achieves a Word Error Rate (WER) of **0.0**, indicating perfect transcription on the evaluation set.
## Model Description
Whisper Small is a transformer-based automatic speech recognition (ASR) model developed by OpenAI. By fine-tuning it on the *Surah Fatiha* dataset, this model becomes highly accurate in transcribing Quranic recitation. It is designed to assist in religious, educational, and research-oriented tasks that require precise Quranic transcription.
## Performance Metrics
On the evaluation set, the model achieved:
- **Loss**: 0.0088
- **Word Error Rate (WER)**: 0.0
These metrics showcase the model's exceptional performance and reliability in transcribing Surah Fatiha audio.
## Training Results
The following table summarizes the training process and results:
| **Training Loss** | **Epoch** | **Step** | **Validation Loss** | **WER** |
|:------------------:|:---------:|:--------:|:-------------------:|:----------:|
| No log | 0.5556 | 10 | 1.1057 | 96.2766 |
| No log | 1.1111 | 20 | 0.3582 | 29.7872 |
| 0.6771 | 1.6667 | 30 | 0.1882 | 23.4043 |
| 0.6771 | 2.2222 | 40 | 0.0928 | 25.0 |
| 0.0289 | 2.7778 | 50 | 0.0660 | 34.0426 |
| 0.0289 | 3.3333 | 60 | 0.0484 | 32.9787 |
| 0.0289 | 3.8889 | 70 | 0.0241 | 25.5319 |
| 0.0056 | 4.4444 | 80 | 0.0184 | 28.7234 |
| 0.0056 | 5.0 | 90 | 0.0111 | 0.0 |
| 0.0019 | 5.5556 | 100 | 0.0088 | 0.0 |
## Intended Uses & Limitations
### Intended Uses
- **Speech-to-text transcription** of Quranic recitation for Surah Fatiha.
- Educational tools to assist in learning and practicing Quranic recitation.
- Research and analysis of Quranic audio transcription methods.
### Limitations
- This model is fine-tuned specifically for Surah Fatiha and may not generalize well to other chapters or non-Quranic Arabic audio.
- Variability in audio quality, accents, or recitation styles might affect performance.
- Optimal performance is achieved with high-quality audio inputs.
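## Example Usage
As a minimal sketch (not from the card), inference with the `transformers` ASR pipeline; the audio path is a placeholder, and the language hint is an assumption for Arabic recitation:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="fawzanaramam/Whisper-Small-Finetuned-on-Surah-Fatiha",
)
print(asr("recitation.wav", generate_kwargs={"language": "arabic"})["text"])
```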
## Training and Evaluation Data
The model was trained on *The Truth 2.0 - Surah Fatiha* dataset, which comprises high-quality audio recordings of Surah Fatiha and their corresponding transcripts. The dataset was meticulously curated to ensure the accuracy and authenticity of Quranic content.
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate**: 1e-05
- **Training Batch Size**: 16
- **Evaluation Batch Size**: 8
- **Seed**: 42
- **Optimizer**: Adam (betas=(0.9, 0.999), epsilon=1e-08)
- **Learning Rate Scheduler**: Linear
- **Warmup Steps**: 10
- **Training Steps**: 100
- **Mixed Precision Training**: Native AMP
### Framework Versions
- **Transformers**: 4.41.1
- **PyTorch**: 2.2.1+cu121
- **Datasets**: 2.19.1
- **Tokenizers**: 0.19.1 |
ram9801/distilgpt2-finetuned-wikitext2 | ram9801 | 2025-01-03T09:37:28Z | 217 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-02T12:48:45Z | ---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7487 | 1.0 | 2334 | 3.6663 |
| 3.648 | 2.0 | 4668 | 3.6462 |
| 3.6015 | 3.0 | 7002 | 3.6425 |
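A minimal generation sketch (not from the card) with the fine-tuned checkpoint; the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ram9801/distilgpt2-finetuned-wikitext2")
model = AutoModelForCausalLM.from_pretrained("ram9801/distilgpt2-finetuned-wikitext2")

ids = tok("The history of", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```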
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/titulm-mpt-1b-v1.0-i1-GGUF | mradermacher | 2025-01-03T09:32:55Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"bn",
"dataset:uonlp/CulturaX",
"dataset:wikipedia",
"base_model:hishab/titulm-mpt-1b-v1.0",
"base_model:quantized:hishab/titulm-mpt-1b-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-01-03T08:54:19Z | ---
base_model: hishab/titulm-mpt-1b-v1.0
datasets:
- uonlp/CulturaX
- wikipedia
language:
- bn
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hishab/titulm-mpt-1b-v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 0.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF/resolve/main/titulm-mpt-1b-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 1.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Nitral-Archive/Nera_Noctis-r64-test_train-12B | Nitral-Archive | 2025-01-03T09:31:11Z | 8 | 0 | null | [
"safetensors",
"mistral",
"en",
"license:other",
"region:us"
]
| null | 2025-01-02T13:43:02Z | ---
license: other
language:
- en
---
# Experimental test train, ymmv

## "Sometimes, the brightest gems are found in the darkest places. For it is in the shadows where we learn to really see the light."
# Prompt format: ChatML
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Instruct/Context import + Textgen preset combined available: [Presets Here](https://huggingface.co/Nitral-AI/Nera_Noctis-12B/tree/main/SillyTavern_Presets)
# ST Example:

|
mergekit-community/mergekit-dare_ties-woeufhp | mergekit-community | 2025-01-03T09:28:53Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:merge:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:unsloth/Llama-3.3-70B-Instruct",
"base_model:merge:unsloth/Llama-3.3-70B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T08:56:30Z | ---
base_model:
- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- unsloth/Llama-3.3-70B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) as a base.
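For reference (not part of the original card), DARE sparsifies each fine-tune's parameter delta before the TIES-style sign-consensus merge: with keep probability p (the per-model `density` in the configuration below), each delta entry is randomly dropped and the survivors rescaled,
```latex
\tilde{\delta}_i = \frac{m_i\,\delta_i}{p}, \qquad m_i \sim \mathrm{Bernoulli}(p)
```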
### Models Merged
The following models were included in the merge:
* [unsloth/Llama-3.3-70B-Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/Llama-3.3-70B-Instruct
    parameters:
      density: 0.30
      weight: 0.50
  - model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
    parameters:
      density: 0.50
      weight: 0.75
merge_method: dare_ties
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
|
fawzanaramam/the-truth-amma-juz | fawzanaramam | 2025-01-03T09:28:22Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"fine-tuned",
"Quran",
"arabic",
"ar",
"dataset:fawzanaramam/the-amma-juz",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-06-13T04:42:50Z | ---
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- fine-tuned
- Quran
- automatic-speech-recognition
- arabic
- whisper
datasets:
- fawzanaramam/the-amma-juz
model-index:
- name: Whisper small Finetuned on Amma Juz of Quran
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      name: The Amma Juz Dataset
      type: fawzanaramam/the-amma-juz
    metrics:
    - type: eval_loss
      value: 0.0058
    - type: eval_wer
      value: 1.1494
---
# Whisper Small Finetuned on Amma Juz of Quran
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small), specialized in transcribing Arabic audio with a focus on Quranic recitation from the *Amma Juz* dataset. This fine-tuning makes the model highly effective for tasks involving accurate recognition of Arabic speech, especially in religious and Quranic contexts.
## Model Description
Whisper Small is a transformer-based model for automatic speech recognition (ASR), developed by OpenAI. By fine-tuning it on the *Amma Juz* dataset, this version achieves state-of-the-art results on transcribing Quranic recitations with minimal word error rates and high accuracy. The fine-tuned model retains the original capabilities of the Whisper architecture while being optimized for Arabic Quranic text.
## Performance Metrics
On the evaluation set, the model achieved:
- **Evaluation Loss**: 0.0058
- **Word Error Rate (WER)**: 1.1494%
- **Evaluation Runtime**: 44.2766 seconds
- **Evaluation Samples per Second**: 2.259
- **Evaluation Steps per Second**: 0.294
These metrics demonstrate the model's efficiency and accuracy when processing Quranic recitations.
## Intended Uses & Limitations
### Intended Uses
- **Speech-to-text transcription** of Arabic Quranic recitation, specifically from the *Amma Juz*.
- Research and educational purposes in the domain of Quranic studies.
- Applications in tools for learning Quranic recitation.
### Limitations
- The model is fine-tuned on Quranic recitation and may not perform as well on non-Quranic Arabic speech or general Arabic conversations.
- Noise in audio inputs, variations in recitation style, or heavy accents might affect accuracy.
- It is recommended to use clean and high-quality audio for optimal performance.
## Training and Evaluation Data
The model was trained using the *Amma Juz* dataset, which comprises Quranic audio data and corresponding transcripts. This dataset was curated to ensure high-quality representation of Quranic recitations.
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate**: 1e-05
- **Training Batch Size**: 16
- **Evaluation Batch Size**: 8
- **Seed**: 42
- **Optimizer**: Adam (betas=(0.9, 0.999), epsilon=1e-08)
- **Learning Rate Scheduler**: Linear
- **Warmup Steps**: 10
- **Number of Epochs**: 3.0
- **Mixed Precision Training**: Native AMP
### Framework Versions
- **Transformers**: 4.41.1
- **PyTorch**: 2.2.1+cu121
- **Datasets**: 2.19.1
- **Tokenizers**: 0.19.1 |
VinserRas/gemma-2b-it-bnb-4bit-erudite-id | VinserRas | 2025-01-03T09:25:34Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"id",
"dataset:SweatGuard2/garuda-indonesian",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T09:16:16Z | ---
base_model: unsloth/gemma-2b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
license: apache-2.0
language:
- en
- id
datasets:
- SweatGuard2/garuda-indonesian
metrics:
- character
---
# Uploaded model
- **Developed by:** VinserRas
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
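A minimal loading sketch (assumed, not from the card); the repository name indicates a bitsandbytes 4-bit checkpoint, so a CUDA GPU with `bitsandbytes` installed is assumed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VinserRas/gemma-2b-it-bnb-4bit-erudite-id"
# The checkpoint is pre-quantized, so transformers picks up the 4-bit config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```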
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Shifa1301/banglish-to-bengali-model | Shifa1301 | 2025-01-03T09:22:19Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-01-03T09:21:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
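A minimal sketch, assuming (from the repository name and the `mt5` tag) that the model maps romanized "Banglish" text to Bengali script; the example input is invented:
```python
from transformers import pipeline

translit = pipeline("text2text-generation", model="Shifa1301/banglish-to-bengali-model")
print(translit("ami tomake bhalobashi")[0]["generated_text"])
```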
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anilguleroglu/llama-turkish-100m | anilguleroglu | 2025-01-03T09:21:49Z | 179 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-02T10:31:47Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: llama-turkish-100m
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-turkish-100m
This model was trained from scratch (no base model was recorded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Nitral-AI/Nera_Noctis-12B | Nitral-AI | 2025-01-03T09:20:50Z | 70 | 11 | null | [
"safetensors",
"mistral",
"en",
"license:other",
"region:us"
]
| null | 2025-01-01T02:01:24Z | ---
license: other
language:
- en
---

## "Sometimes, the brightest gems are found in the darkest places. For it is in the shadows where we learn to really see the light."
## Quants (thanks to Bartowski!): [GGUF available here](https://huggingface.co/bartowski/Nera_Noctis-12B-GGUF) <3 [4bpw-exl2](https://huggingface.co/Nitral-AI/Nera_Noctis-12B-4bpw-exl2)
# Prompt format: ChatML
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
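As a minimal sketch, the template can be filled in by hand like this (the system prompt and user message are illustrative placeholders):

```python
# Assembling the ChatML prompt above manually; values are placeholders.
system_prompt = "You are Nera, a thoughtful storyteller."
user_prompt = "Describe a moonlit forest."

chatml = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
```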
# Instruct/Context import + Textgen preset combined available: [Presets Here](https://huggingface.co/Nitral-AI/Nera_Noctis-12B/tree/main/SillyTavern_Presets)
# ST Example:

|
VitoCorleone72/AB | VitoCorleone72 | 2025-01-03T09:20:44Z | 244 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T09:20:42Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1343113.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alex
---
# AB
<Gallery />
## Trigger words
You should use `alex` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/VitoCorleone72/AB/tree/main) them in the Files & versions tab.
|
layonsan/flowertune-llm-google-t5-small | layonsan | 2025-01-03T09:08:08Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-11-21T14:23:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen-portuguese-luana-7b-i1-GGUF | mradermacher | 2025-01-03T09:05:58Z | 601 | 0 | transformers | [
"transformers",
"gguf",
"Misral",
"Portuguese",
"7b",
"chat",
"portugues",
"pt",
"dataset:rhaymison/superset",
"base_model:rhaymison/Qwen-portuguese-luana-7b",
"base_model:quantized:rhaymison/Qwen-portuguese-luana-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-01-03T08:12:47Z | ---
base_model: rhaymison/Qwen-portuguese-luana-7b
datasets:
- rhaymison/superset
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Misral
- Portuguese
- 7b
- chat
- portugues
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rhaymison/Qwen-portuguese-luana-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
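For a quick programmatic test, here is a minimal sketch using the llama-cpp-python bindings (an assumption; any llama.cpp-compatible runtime works). Point `model_path` at one of the quant files from the table below:

```python
# Sketch only: running a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Qwen-portuguese-luana-7b.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explique, em poucas palavras, o que é aprendizado de máquina.", max_tokens=128)
print(out["choices"][0]["text"])
```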
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q4_1.gguf) | i1-Q4_1 | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF/resolve/main/Qwen-portuguese-luana-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/titulm-mpt-1b-v1.0-GGUF | mradermacher | 2025-01-03T09:05:58Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"bn",
"dataset:uonlp/CulturaX",
"dataset:wikipedia",
"base_model:hishab/titulm-mpt-1b-v1.0",
"base_model:quantized:hishab/titulm-mpt-1b-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-01-03T00:06:53Z | ---
base_model: hishab/titulm-mpt-1b-v1.0
datasets:
- uonlp/CulturaX
- wikipedia
language:
- bn
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/hishab/titulm-mpt-1b-v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q4_K_M.gguf) | Q4_K_M | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q5_K_M.gguf) | Q5_K_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q6_K.gguf) | Q6_K | 1.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/titulm-mpt-1b-v1.0-GGUF/resolve/main/titulm-mpt-1b-v1.0.f16.gguf) | f16 | 2.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen-portuguese-luana-7b-GGUF | mradermacher | 2025-01-03T09:00:40Z | 85 | 0 | transformers | [
"transformers",
"gguf",
"Misral",
"Portuguese",
"7b",
"chat",
"portugues",
"pt",
"dataset:rhaymison/superset",
"base_model:rhaymison/Qwen-portuguese-luana-7b",
"base_model:quantized:rhaymison/Qwen-portuguese-luana-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T07:31:41Z | ---
base_model: rhaymison/Qwen-portuguese-luana-7b
datasets:
- rhaymison/superset
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Misral
- Portuguese
- 7b
- chat
- portugues
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rhaymison/Qwen-portuguese-luana-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-portuguese-luana-7b-GGUF/resolve/main/Qwen-portuguese-luana-7b.f16.gguf) | f16 | 15.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/Llama-Deepsync-3B-GGUF | QuantFactory | 2025-01-03T08:57:05Z | 166 | 2 | transformers | [
"transformers",
"gguf",
"Llama",
"Code",
"CoT",
"Math",
"Deepsync",
"3b",
"ollama",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:prithivMLmods/Codepy-Deepthink-3B",
"base_model:quantized:prithivMLmods/Codepy-Deepthink-3B",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-01-03T08:38:50Z |
---
license: creativeml-openrail-m
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- prithivMLmods/Codepy-Deepthink-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- Llama
- Code
- CoT
- Math
- Deepsync
- 3b
- ollama
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-Deepsync-3B-GGUF
This is quantized version of [prithivMLmods/Llama-Deepsync-3B](https://huggingface.co/prithivMLmods/Llama-Deepsync-3B) created using llama.cpp
# Original Model Card
<pre align="center">
.___ ___________.
__| _/____ ____ ______ _________.__. ____ ____ \_____ \_ |__
/ __ |/ __ \_/ __ \\____ \/ ___< | |/ \_/ ___\ _(__ <| __ \
/ /_/ \ ___/\ ___/| |_> >___ \ \___ | | \ \___ / \ \_\ \
\____ |\___ >\___ > __/____ >/ ____|___| /\___ > /______ /___ /
\/ \/ \/|__| \/ \/ \/ \/ \/ \/
</pre>
The **Llama-Deepsync-3B** is a fine-tuned version of the **Llama-3.2-3B-Instruct** base model, designed for text generation tasks that require deep reasoning, logical structuring, and problem-solving. This model leverages its optimized architecture to provide accurate and contextually relevant outputs for complex queries, making it ideal for applications in education, programming, and creative writing.
With its robust natural language processing capabilities, **Llama-Deepsync-3B** excels in generating step-by-step solutions, creative content, and logical analyses. Its architecture integrates advanced understanding of both structured and unstructured data, ensuring precise text generation aligned with user inputs.
- Has significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens; can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
# **Model Architecture**
Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
# **Use with transformers**
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "prithivMLmods/Llama-Deepsync-3B"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
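For reference, a minimal sketch of the same inference using the Auto classes and `generate()` directly (settings mirror the pipeline example above; adjust dtype and device mapping to your hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-Deepsync-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template, then generate
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```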
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
# **Run with Ollama [Ollama Run]**
Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly.
## Quick Start: Step-by-Step Guide
1. **Install Ollama 🦙**: Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your system.

2. **Create Your Model File**: Create a file named after your model, e.g., `metallama`, and add the following line to specify the base model (the base model file must be in the same directory):

    ```bash
    FROM Llama-3.2-1B.F16.gguf
    ```

3. **Create and Patch the Model**: Run the following commands to create and verify your model:

    ```bash
    ollama create metallama -f ./metallama
    ollama list
    ```

4. **Run the Model**: Use the following command to start your model:

    ```bash
    ollama run metallama
    ```

5. **Interact with the Model**: Once the model is running, interact with it:

    ```plaintext
    >>> Tell me about Space X.
    Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
    ```
## Conclusion
With Ollama, running and interacting with models is seamless. Start experimenting today!
|
Shashwath01/Idefic_medical_VQA_merged_4bit | Shashwath01 | 2025-01-03T08:56:46Z | 87 | 5 | transformers | [
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"Medical Visual Question Answering",
"VQA",
"IDEFIC",
"9B",
"4 Bit",
"LORA",
"Combining base with Adapter models",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| image-text-to-text | 2024-02-24T12:18:00Z | ---
library_name: transformers
tags:
- Medical Visual Question Answering
- VQA
- IDEFIC
- 9B
- 4 Bit
- LORA
- Combining base with Adapter models
license: apache-2.0
---
# Contributed by:
- Shashwath P
- Shashank Ashok
- Akilan Yohendiran
# Total downloads all time - 2106
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
The following model is an experimental fine-tuned version of IDEFIC 9B for medical Visual Question Answering.
It uses a dataset combining SLAKE and VQARAD.
Check the following repository for the training, merging, and inference notebooks:
https://github.com/Shashwathp/Idefic_medical_vqa
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [@Shashwath01,@Akill19,@Shashank91097 ]
- **Model type:** [Multimodal, Visual Question Answering]
- **Language(s) (NLP):** [English]
- **License:** [Apache - 2.0]
- **Finetuned from model [optional]:** [IDEFIC 9B]
### Dataset
https://huggingface.co/datasets/Shashwath01/VQARAD_SLAKE
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Shashwathp/Idefic_medical_vqa
<!--- **Paper :** https://ieeexplore.ieee.org/document/10616779-->
## How to Get Started with the Model
Check the below link to get started with inferencing.
https://github.com/Shashwathp/Idefic_medical_vqa/blob/main/inference.ipynb
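A hedged sketch of loading this checkpoint for inference is shown below; the image URL and question are hypothetical placeholders, and the exact flow used by the authors is in the linked notebook.

```python
# Sketch only: standard IDEFICS inference classes from transformers.
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "Shashwath01/Idefic_medical_VQA_merged_4bit"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, device_map="auto")

prompts = [[
    "User: What abnormality is visible in this scan?",
    "https://example.com/chest_xray.png",  # hypothetical image URL
    "<end_of_utterance>\nAssistant:",
]]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```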
<!--## Citation
If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.
[1] S. Punneshetty, S. Ashok, M. Niranjanamurthy, and S. V. N. Murthy, "Fine Tuning Idefic 9b With LORA for Multimodal Medical VQA," in *Proceedings of the 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS)*, India, Apr. 2024, pp. 1-8. DOI: 10.1109/ICKECS61492.2024.10616779.-->
|
mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF | mradermacher | 2025-01-03T08:54:22Z | 39 | 1 | transformers | [
"transformers",
"gguf",
"sv",
"dataset:neph1/bellman-7b-finetune",
"dataset:neph1/truthy-dpo-v0.1-swe",
"base_model:neph1/bellman-7b-mistral-instruct-v0.2",
"base_model:quantized:neph1/bellman-7b-mistral-instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-02T23:50:43Z | ---
base_model: neph1/bellman-7b-mistral-instruct-v0.2
datasets:
- neph1/bellman-7b-finetune
- neph1/truthy-dpo-v0.1-swe
language:
- sv
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neph1/bellman-7b-mistral-instruct-v0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/bellman-7b-mistral-instruct-v0.2-GGUF/resolve/main/bellman-7b-mistral-instruct-v0.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ziippy/code-llama3-8B-text-to-sql-ver0.1 | ziippy | 2025-01-03T08:47:11Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-01-03T08:43:34Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VitoCorleone72/Plaza | VitoCorleone72 | 2025-01-03T08:46:16Z | 118 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T08:46:15Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/00159-3381242384.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aubrey
---
# Plaza
<Gallery />
## Trigger words
You should use `aubrey` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/VitoCorleone72/Plaza/tree/main) them in the Files & versions tab.
|
csikasote/mms-1b-swagen-combined-15hrs-model | csikasote | 2025-01-03T08:45:28Z | 20 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-01-03T07:58:25Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-swagen-combined-15hrs-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-swagen-combined-15hrs-model
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Wer: 0.1929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.8801 | 0.0797 | 100 | 0.7377 | 0.4426 |
| 0.6766 | 0.1594 | 200 | 0.2688 | 0.2006 |
| 0.5153 | 0.2391 | 300 | 0.2484 | 0.1975 |
| 0.526 | 0.3189 | 400 | 0.2398 | 0.1949 |
| 0.4874 | 0.3986 | 500 | 0.2398 | 0.1958 |
| 0.4666 | 0.4783 | 600 | 0.2358 | 0.1909 |
| 0.4406 | 0.5580 | 700 | 0.2391 | 0.1944 |
| 0.4689 | 0.6377 | 800 | 0.2334 | 0.1926 |
| 0.462 | 0.7174 | 900 | 0.2293 | 0.1927 |
| 0.4407 | 0.7971 | 1000 | 0.2293 | 0.1931 |
| 0.4567 | 0.8768 | 1100 | 0.2298 | 0.1928 |
| 0.4711 | 0.9566 | 1200 | 0.2305 | 0.1972 |
| 0.4444 | 1.0359 | 1300 | 0.2307 | 0.1929 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
matrixportal/L3-Luna-8B-Q4_K_S-GGUF | matrixportal | 2025-01-03T08:43:53Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Casual-Autopsy/L3-Luna-8B",
"base_model:quantized:Casual-Autopsy/L3-Luna-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T08:43:31Z | ---
base_model: Casual-Autopsy/L3-Luna-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# matrixportal/L3-Luna-8B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Casual-Autopsy/L3-Luna-8B`](https://huggingface.co/Casual-Autopsy/L3-Luna-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Casual-Autopsy/L3-Luna-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/L3-Luna-8B-Q4_K_S-GGUF --hf-file l3-luna-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/L3-Luna-8B-Q4_K_S-GGUF --hf-file l3-luna-8b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/L3-Luna-8B-Q4_K_S-GGUF --hf-file l3-luna-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/L3-Luna-8B-Q4_K_S-GGUF --hf-file l3-luna-8b-q4_k_s.gguf -c 2048
```
|
nvidia/stt_pt_fastconformer_hybrid_large_pc | nvidia | 2025-01-03T08:43:22Z | 190 | 0 | nemo | [
"nemo",
"FastConformer",
"NeMo",
"Portuguese",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_16_0",
"dataset:facebook/multilingual_librispeech",
"license:cc-by-nc-4.0",
"region:us"
]
| automatic-speech-recognition | 2024-12-24T17:18:03Z | ---
license: cc-by-nc-4.0
language:
- pt
metrics:
- wer
- cer
pipeline_tag: automatic-speech-recognition
library_name: nemo
datasets:
- mozilla-foundation/common_voice_16_0
- facebook/multilingual_librispeech
tags:
- FastConformer
- NeMo
- Portuguese
---
# Model Overview
## Description:
STT PT FastConformer Hybrid Transducer-CTC Large transcribes speech into upper- and lower-case Portuguese text, including spaces, periods, commas, and question marks. This collection contains the Brazilian Portuguese FastConformer Hybrid (Transducer and CTC) Large model (around 115M parameters) with punctuation and capitalization, trained on around 2,200 hours of Portuguese speech.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.
It utilizes a Google SentencePiece [2] tokenizer with a vocabulary size of 128.
This model is ready for non-commercial use.
## NVIDIA NeMo: Training
To train, fine-tune, or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="nvidia/stt_pt_fastconformer_hybrid_large_pc")
```
### Transcribing using Python
Having instantiated the model, simply do:
```python
asr_model.transcribe([path_to_audio_file])
```
### Transcribing many audio files
Using Transducer mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_pt_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
Using CTC mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_pt_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
 decoder_type="ctc"
```
### Input
This model accepts 16000 Hz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
## Training
The NeMo toolkit [3] was used to train the model for several hundred epochs. The model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/speech_to_text_finetune.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/asr_finetune/speech_to_text_finetune.yaml).
The tokenizer for this model was built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
The model was initialized with the weights of the [Spanish FastConformer Hybrid (Transducer and CTC) Large P&C model](https://huggingface.co/nvidia/stt_es_fastconformer_hybrid_large_pc) and fine-tuned on Portuguese using labeled and unlabeled data (with pseudo-labels).
The MLS dataset was used as unlabeled data as it does not contain punctuation and capitalization.
## Training Dataset:
The model was trained on around 2200 hours of Portuguese speech data.
- [Mozilla Common Voice 16.0 Portuguese](https://commonvoice.mozilla.org/en/datasets) [83h]
- Data Collection Method: by Human
- Labeling Method: by Human
- [Multilingual Librispeech](https://www.openslr.org/94/) [160h]
- Data Collection Method: by Human
- Labeling Method: Pseudo-labels
- Proprietary corpus [2000h]
- Data Collection Method: by Human
- Labeling Method: Pseudo-labels
## Testing Dataset:
**Link:**
1. [Mozilla Common Voice 16(MCV16)](https://commonvoice.mozilla.org/en/datasets) <br>
2. [Multilingual Librispeech](https://www.openslr.org/94/) <br>
## Performance
**Test Hardware:** A5000 GPU
The performance of Automatic Speech Recognition models is measured using Character Error Rate (CER) and Word Error Rate (WER).
The following table summarizes the performance of the model in this collection with the Transducer and CTC decoders; a sketch of how WER/CER are computed follows the table.
| Model | MCV %WER/CER test |MLS %WER/CER test |
|-----------|--------------|---------------|
| RNNT head | 12.03 / 3.20 | 24.78 / 5.92 |
| CTC head | 12.83 / 3.39 | 25.7 / 6.18 |
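As an illustration of the metrics, here is a minimal sketch computing WER and CER with the `jiwer` package (an assumption; NeMo also ships its own metric implementations):

```python
# Illustrative only: WER/CER on a toy reference/hypothesis pair.
import jiwer

reference = "o gato está no telhado"
hypothesis = "o gato esta no telhado"

print("WER:", jiwer.wer(reference, hypothesis))  # word-level edit distance / word count
print("CER:", jiwer.cer(reference, hypothesis))  # character-level edit distance / char count
```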
### License/Terms of Use:
The model weights are distributed under a research-friendly, non-commercial CC BY-NC 4.0 license.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## References:
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084) <br>
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) <br>
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) <br> |
Edens-Gate/control-qwen-testing | Edens-Gate | 2025-01-03T08:38:23Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T08:35:00Z | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# control-qwen
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) + /home/mango/Misc/outputs/checkpoint-3684 as a base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct+/home/mango/Misc/outputs/checkpoint-3684
dtype: bfloat16
merge_method: passthrough
models:
- model: Qwen/Qwen2.5-7B-Instruct+/home/mango/Misc/outputs/checkpoint-3684
```
|
studioghAI/1955-renault-4cv | studioghAI | 2025-01-03T08:35:10Z | 14 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-03T08:35:02Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 55r4cv
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# 1955 renault 4cv
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `55r4cv` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
mergekit-community/mergekit-dare_ties-uyuzvch | mergekit-community | 2025-01-03T08:29:32Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Infermatic/MN-12B-Inferor-v0.1",
"base_model:merge:Infermatic/MN-12B-Inferor-v0.1",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:merge:TheDrummer/Rocinante-12B-v1.1",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:merge:unsloth/Mistral-Nemo-Instruct-2407",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T08:24:12Z | ---
base_model:
- unsloth/Mistral-Nemo-Instruct-2407
- TheDrummer/Rocinante-12B-v1.1
- Infermatic/MN-12B-Inferor-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) as a base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
* [Infermatic/MN-12B-Inferor-v0.1](https://huggingface.co/Infermatic/MN-12B-Inferor-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Rocinante-12B-v1.1
parameters:
density: 0.5
weight: 0.25
- model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
density: 0.5
weight: 0.5
- model: Infermatic/MN-12B-Inferor-v0.1
parameters:
density: 0.5
weight: 0.75
merge_method: dare_ties
base_model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
RioShiina/Llama-3.1-Swallow-70B-v0.1-exl2 | RioShiina | 2025-01-03T08:26:52Z | 10 | 0 | null | [
"ja",
"en",
"arxiv:2407.21783",
"base_model:tokyotech-llm/Llama-3.1-Swallow-70B-v0.1",
"base_model:quantized:tokyotech-llm/Llama-3.1-Swallow-70B-v0.1",
"license:llama3.1",
"region:us"
]
| null | 2024-10-11T07:44:48Z | ---
base_model: tokyotech-llm/Llama-3.1-Swallow-70B-v0.1
base_model_relation: quantized
license: llama3.1
language:
- ja
- en
---
**[2.2bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-v0.1-exl2/tree/2.2bpw)** (high quantization loss; intended only for testing on 24 GB VRAM.)
**[4.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-v0.1-exl2/tree/4.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-v0.1-exl2/tree/6.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-v0.1-exl2/tree/8.0bpw)**
# Llama-3.1-Swallow-70B-v0.1-exl2
- Model creator: [tokyotech-llm](https://huggingface.co/tokyotech-llm)
- Original model: [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1)
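For reference, a minimal sketch of fetching a single quant branch with `huggingface_hub` (an assumption; the revision names match the branches linked above):

```python
# Sketch only: download one bpw branch (Git revision) of this repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="rioshiina/Llama-3.1-Swallow-70B-v0.1-exl2",
    revision="4.0bpw",
    local_dir="Llama-3.1-Swallow-70B-v0.1-exl2-4.0bpw",
)
```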
### License
[META LLAMA 3.1 COMMUNITY LICENSE](https://www.llama.com/llama3_1/license/)
### Citations
```tex
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
``` |
mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF | mradermacher | 2025-01-03T08:25:33Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T07:50:00Z | ---
base_model: zelk12/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
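Where a quant is split across parts (in larger repos), the pieces are plain byte-splits that can be joined by straight concatenation; a Python sketch (the `part1of2`-style file names are the pattern used in such repos — the quants listed below are single files, so adjust to the actual listing):
```python
import glob, shutil

# Concatenate split GGUF parts back into a single file, in order
parts = sorted(glob.glob("MODEL.Q8_0.gguf.part*of*"))
with open("MODEL.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```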
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MU-gemma-2-MTM2MUMTM4-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RioShiina/Llama-3.1-Swallow-70B-Instruct-v0.1-exl2 | RioShiina | 2025-01-03T08:25:31Z | 7 | 0 | null | [
"ja",
"en",
"arxiv:2407.21783",
"base_model:tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1",
"base_model:quantized:tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1",
"license:llama3.1",
"region:us"
]
| null | 2024-10-11T07:45:03Z | ---
base_model: tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1
base_model_relation: quantized
license: llama3.1
language:
- ja
- en
---
**[2.2bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-Instruct-v0.1-exl2/tree/2.2bpw)** (significant quality loss; intended only for testing on 24GB VRAM.)
**[4.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-Instruct-v0.1-exl2/tree/4.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-Instruct-v0.1-exl2/tree/6.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/Llama-3.1-Swallow-70B-Instruct-v0.1-exl2/tree/8.0bpw)**
# Llama-3.1-Swallow-70B-Instruct-v0.1-exl2
- Model creator: [tokyotech-llm](https://huggingface.co/tokyotech-llm)
- Original model: [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1)
### License
[META LLAMA 3.1 COMMUNITY LICENSE](https://www.llama.com/llama3_1/license/)
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
あなたは誠実で優秀な日本人のアシスタントです。<|eot_id|><|start_header_id|>user<|end_header_id|>
東京の紅葉した公園で、東京タワーと高層ビルを背景に、空を舞うツバメと草地に佇むラマが出会う温かな物語を書いてください。<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
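The same prompt can be produced programmatically with the tokenizer's chat template; a sketch (assumes the original model's `transformers` tokenizer):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1")
messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "..."},  # your prompt here
]
# Renders the <|start_header_id|>...<|eot_id|> structure shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```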
### Citations
```tex
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
``` |
jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF | jimmylam6666 | 2025-01-03T08:22:12Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512",
"base_model:quantized:roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T08:21:14Z | ---
base_model: roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512
tags:
- llama-cpp
- gguf-my-repo
---
# jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF
This model was converted to GGUF format from [`roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512`](https://huggingface.co/roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/roy12715/Mistral-Enmo-RPGPT-E3-Rank216-512) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF --hf-file mistral-enmo-rpgpt-e3-rank216-512-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF --hf-file mistral-enmo-rpgpt-e3-rank216-512-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF --hf-file mistral-enmo-rpgpt-e3-rank216-512-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jimmylam6666/Mistral-Enmo-RPGPT-E3-Rank216-512-Q8_0-GGUF --hf-file mistral-enmo-rpgpt-e3-rank216-512-q8_0.gguf -c 2048
```
|
RioShiina/Llama-3-Swallow-70B-v0.1-exl2 | RioShiina | 2025-01-03T08:19:55Z | 11 | 0 | null | [
"ja",
"en",
"base_model:tokyotech-llm/Llama-3-Swallow-70B-v0.1",
"base_model:quantized:tokyotech-llm/Llama-3-Swallow-70B-v0.1",
"license:llama3",
"region:us"
]
| null | 2024-10-14T01:56:29Z | ---
base_model: tokyotech-llm/Llama-3-Swallow-70B-v0.1
base_model_relation: quantized
license: llama3
language:
- ja
- en
---
**[2.2bpw](https://huggingface.co/rioshiina/Llama-3-Swallow-70B-v0.1-exl2/tree/2.2bpw)** (significant quality loss; intended only for testing on 24GB VRAM.)
**[4.0bpw](https://huggingface.co/rioshiina/Llama-3-Swallow-70B-v0.1-exl2/tree/4.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/Llama-3-Swallow-70B-v0.1-exl2/tree/6.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/Llama-3-Swallow-70B-v0.1-exl2/tree/8.0bpw)**
# Llama-3-Swallow-70B-v0.1-exl2
- Model creator: [tokyotech-llm](https://huggingface.co/tokyotech-llm)
- Original model: [Llama-3-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1)
### License
[META LLAMA 3 COMMUNITY LICENSE](https://llama.meta.com/llama3/license/)
### Citations
```tex
@misc{llama3swallow,
title={Llama 3 Swallow},
url={https://swallow-llm.github.io/llama3-swallow.en.html},
author={Swallow LLM},
year={2024},
}
```
```tex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
``` |
QuantFactory/YuLan-Mini-GGUF | QuantFactory | 2025-01-03T08:19:30Z | 170 | 2 | transformers | [
"transformers",
"gguf",
"code",
"math",
"text-generation",
"en",
"zh",
"dataset:yulan-team/YuLan-Mini-Datasets",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:bigcode/the-stack-v2",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:math-ai/AutoMathText",
"dataset:gair-prox/open-web-math-pro",
"dataset:RUC-AIBOX/long_form_thought_data_5k",
"dataset:internlm/Lean-Workbook",
"dataset:internlm/Lean-Github",
"dataset:deepseek-ai/DeepSeek-Prover-V1",
"dataset:ScalableMath/Lean-STaR-base",
"dataset:ScalableMath/Lean-STaR-plus",
"dataset:ScalableMath/Lean-CoT-base",
"dataset:ScalableMath/Lean-CoT-plus",
"dataset:opencsg/chinese-fineweb-edu",
"dataset:liwu/MNBVC",
"dataset:vikp/textbook_quality_programming",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:OpenCoder-LLM/opc-annealing-corpus",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:XinyaoHu/AMPS_mathematica",
"dataset:deepmind/math_dataset",
"dataset:mrfakename/basic-math-10m",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:MU-NLPC/Calc-ape210k",
"dataset:manu/project_gutenberg",
"dataset:storytracer/LoC-PD-Books",
"dataset:allenai/dolma",
"arxiv:2412.17743",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-01-03T08:01:47Z |
---
license: mit
library_name: transformers
pipeline_tag: text-generation
datasets:
- yulan-team/YuLan-Mini-Datasets
- HuggingFaceFW/fineweb-edu
- bigcode/the-stack-v2
- mlfoundations/dclm-baseline-1.0
- math-ai/AutoMathText
- gair-prox/open-web-math-pro
- RUC-AIBOX/long_form_thought_data_5k
- internlm/Lean-Workbook
- internlm/Lean-Github
- deepseek-ai/DeepSeek-Prover-V1
- ScalableMath/Lean-STaR-base
- ScalableMath/Lean-STaR-plus
- ScalableMath/Lean-CoT-base
- ScalableMath/Lean-CoT-plus
- opencsg/chinese-fineweb-edu
- liwu/MNBVC
- vikp/textbook_quality_programming
- HuggingFaceTB/smollm-corpus
- OpenCoder-LLM/opc-annealing-corpus
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- XinyaoHu/AMPS_mathematica
- deepmind/math_dataset
- mrfakename/basic-math-10m
- microsoft/orca-math-word-problems-200k
- AI-MO/NuminaMath-CoT
- HuggingFaceTB/cosmopedia
- MU-NLPC/Calc-ape210k
- manu/project_gutenberg
- storytracer/LoC-PD-Books
- allenai/dolma
language:
- en
- zh
tags:
- code
- math
arxiv: 2412.17743
model-index:
- name: YuLan-Mini
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.640
verified: false
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 0.659
verified: false
- task:
type: text-generation
dataset:
type: math-500
name: MATH-500
metrics:
- name: maj@1
type: maj@1
value: 0.378
verified: false
- task:
type: text-generation
dataset:
type: gsm8k
name: GSM8K
metrics:
- name: maj@1
type: maj@1
value: 0.684
verified: false
---
[](https://hf.co/QuantFactory)
# QuantFactory/YuLan-Mini-GGUF
This is a quantized version of [yulan-team/YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini), created using llama.cpp.
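A minimal sketch for loading one of the quantized files with the `llama-cpp-python` bindings (the exact file name depends on which quant you download — check this repo's file list):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="YuLan-Mini.Q4_K_M.gguf", n_ctx=4096)
out = llm("Renmin University of China is", max_tokens=100)
print(out["choices"][0]["text"])
```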
# Original Model Card
# Important Notice: This is a pre-trained **base model** without instruction-following capabilities. The **SFT version** will be released within a few weeks.
<div align=center>
<img src="assets/YuLan-logo.jpg" width="400px">
<h1>YuLan-Mini: An Open Data-efficient Language Model</h1>
<a href="https://github.com/RUC-GSAI/YuLan-Mini/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue" alt="license"></a>
<a href="https://arxiv.org/abs/2412.17743" target="_blank"><img src=https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv></a>
<a href="https://huggingface.co/collections/yulan-team/yulan-mini-676d214b24376739b00d95f3"><img alt="Static Badge" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?color=8A2BE2"></a>
<a href="https://github.com/RUC-GSAI/YuLan-Mini" target="_blank"><img src="https://img.shields.io/github/stars/RUC-GSAI/YuLan-Mini"></a>
</div>
YuLan-Mini is a lightweight language model with 2.4 billion parameters. It achieves performance comparable to industry-leading models trained on significantly more data, despite being pre-trained on only 1.08T tokens. The model excels particularly in the domains of **mathematics** and **code**. To facilitate reproducibility, we will open-source the relevant pre-training resources.
---
## Model Downloads 🔗
> Model weights will be uploaded after final preparations.
| Model | Context Length | SFT |
|---------|----------------|-----|
| [YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini) (Recommended) | 28K | ❎ |
| [YuLan-Mini-2.4B-4K](https://huggingface.co/yulan-team/YuLan-Mini-Intermediate-4K) | 4K | ❎ |
| YuLan-Mini-Instruct | Coming soon | ✅ |
---
## Features 🌟
<div align=center>
<img src="assets/main.png">
</div>
Our pre-training methodology improves training efficiency through three key innovations:
1. an elaborately designed **data pipeline** that combines data cleaning with data-scheduling strategies;
2. a systematic **optimization method** that effectively mitigates training instability;
3. an effective **annealing approach** that integrates targeted data selection and long-context training.
---
## Benchmarks 🌟
| Models | Model Size | # Train Tokens | Context Length | MATH 500 | GSM 8K | Human Eval | MBPP | RACE Middle | RACE High | RULER |
|:----------------|----------:|--------------:|--------------:|:--------|:------|:----------|:------|:-----------|:---------|:------|
| MiniCPM | 2.6B | 1.06T | 4K | 15.00 | 53.83 | 50.00* | 47.31 | 56.61 | 44.27 | N/A |
| Qwen-2 | 1.5B | 7T | 128K | 22.60 | 46.90* | 34.80* | 46.90* | 55.77 | 43.69 | 60.16 |
| Qwen2.5 | 0.5B | 18T | 128K | 23.60 | 41.60* | 30.50* | 39.30* | 52.36 | 40.31 | 49.23 |
| Qwen2.5 | 1.5B | 18T | 128K | **45.40** | **68.50\*** | 37.20* | 60.20* | **58.77** | 44.33 | <ins>68.26</ins> |
| Gemma2 | 2.6B | 2T | 8K | 18.30* | 30.30* | 19.50* | 42.10* | - | - | N/A |
| StableLM2 | 1.7B | 2T | 4K | - | 20.62 | 8.50* | 17.50 | 56.33 | **45.06** | N/A |
| SmolLM2 | 1.7B | 11T | 8K | 11.80 | - | 23.35 | 45.00 | 55.77 | 43.06 | N/A |
| Llama3.2 | 3.2B | 9T | 128K | 7.40 | - | 29.30 | 49.70 | 55.29 | 43.34 | **77.06** |
| YuLan-Mini | 2.4B | 1.04T | 4K | 32.60 | 66.65 | <ins>61.60</ins> | **66.70** | 55.71 | 43.58 | N/A |
| YuLan-Mini | 2.4B | 1.08T | 28K | <ins>37.80</ins> | <ins>68.46</ins> | **64.00** | <ins>65.90</ins>| <ins>57.18</ins> | <ins>44.57</ins> | 51.48 |
| Models | LAMBADA | MMLU | CMMLU | CEval | HellaSwag | WinoGrande | StoryCloze | ARC-e | ARC-c |
|:----------------|:-------|:-----|:-----|:-----|:----------|:-----------|:-----------|:-----|:-----|
| MiniCPM-2.6B | 61.91 | 53.37 | 48.97 | 48.24 | 67.92 | 65.74 | 78.51 | 55.51 | 43.86 |
| Qwen2-1.5B | 64.68 | 55.90 | **70.76** | **71.94** | 66.11 | 66.14 | 77.60 | 62.21 | 42.92 |
| Qwen2.5-0.5B | 52.00 | 47.50 | 52.17 | 54.27 | 50.54 | 55.88 | 71.67 | 56.10 | 39.51 |
| Qwen2.5-1.5B | 62.12 | <ins>60.71</ins> | <ins>67.82</ins> | <ins>69.05</ins> | 67.18 | 64.48 | 76.80 | **71.51** | <ins>53.41</ins> |
| Gemma2-2.6B | - | 52.20*| - | 28.00*| <ins>74.60*</ins> | **71.50\*** | - | - | **55.70\***|
| StableLM2-1.7B | 66.15 | 40.37 | 29.29 | 26.99 | 69.79 | 64.64 | <ins>78.56</ins> | 54.00 | 40.78 |
| SmolLM2-1.7B | <ins>67.42</ins> | 51.91 | 33.46 | 35.10 | 72.96 | 67.40 | **79.32** | 44.82 | 35.49 |
| Llama3.2-3B | **69.08** | **63.40** | 44.44 | 44.49 | **75.62** | <ins>67.48</ins> | 76.80 | <ins>70.12</ins> | 48.81 |
| YuLan-Mini | 64.72 | 51.79 | 48.35 | 51.47 | 68.65 | 67.09 | 76.37 | 69.87 | 50.51 |
| YuLan-Mini | 65.67 | 49.10 | 45.45 | 48.23 | 67.22 | 67.24 | 75.89 | 67.47 | 49.32 |
---
## Pre-Training Resources 🔧
To enhance research transparency and reproducibility, we are open-sourcing relevant [pre-training resources](https://github.com/RUC-GSAI/YuLan-Mini/blob/main/pretrain):
<details><summary>1. Pre-training and Evaluation Code</summary>
The pre-training and evaluation code will be released in a future update.
</details>
<details><summary>2. Intermediate Stage Checkpoints</summary>
The intermediate stage checkpoints are released in <a href="https://huggingface.co/collections/yulan-team/yulan-mini-676d214b24376739b00d95f3">YuLan-Mini</a>.
</details>
<details><summary>3. Optimizer States Before Annealing</summary>
<a href="https://huggingface.co/yulan-team/YuLan-Mini-Before-Annealing">YuLan-Mini-Before-Annealing</a>
</details>
<details><summary>4. Open-Source Datasets Used</summary>
<a href="https://github.com/RUC-GSAI/YuLan-Mini/blob/main/pretrain/datasets-list.md">Used-Datasets-List</a>
</details>
<details><summary>5. Data Distribution for every phase</summary>
<a href="https://github.com/RUC-GSAI/YuLan-Mini/blob/main/pretrain/final.pdf">
<div align=center>
<img src="assets/data_distribution_for_every_phase.png">
</div>
</a>
</details>
<details><summary>6. Synthetic Data</summary>
Data cleaning and synthesis pipeline:
<div align=center>
<img src="assets/data-pipeline.png">
</div>
The synthetic data we are using is released in <a href="https://huggingface.co/collections/yulan-team/yulan-mini-676d214b24376739b00d95f3">YuLan-Mini-Datasets</a>
</details>
<details><summary>7. Intermediate Optimizer States</summary>
Intermediate optimizer states will be released in a future update.
</details>
### What you can do with these pre-training resources
1. **Pre-train** your own LLM. You can use [our data](https://huggingface.co/yulan-team/YuLan-Mini-Datasets) and curriculum to train a model that's just as powerful as YuLan-Mini.
2. Perform your own **learning rate annealing**. During the annealing phase, YuLan-Mini's learning ability is at its peak. You can resume training from [the checkpoint before annealing](https://huggingface.co/yulan-team/YuLan-Mini-Before-Annealing) and use your own dataset for learning rate annealing.
3. **Fine-tune** the Instruct version of the LLM. You can use the YuLan-Mini base model to train your own Instruct version.
4. **Training dynamics** research. You can use YuLan-Mini's intermediate checkpoints to explore internal changes during the pre-training process.
5. **Synthesize** your own data. You can use YuLan-Mini's [data pipeline](https://github.com/RUC-GSAI/YuLan-Mini) to clean and generate your own dataset.
---
## Quick Start 💻
Below is a simple example for inference using Huggingface:
**Huggingface Inference Example**
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Mini")
model = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Mini", torch_dtype=torch.bfloat16)
# Input text
input_text = "Renmin University of China is"
inputs = tokenizer(input_text, return_tensors="pt")
# Completion
output = model.generate(inputs["input_ids"], max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
**vLLM Serve Example**
```bash
vllm serve yulan-team/YuLan-Mini --dtype bfloat16
```
**SGLang Serve Example**
```bash
python -m sglang.launch_server --model-path yulan-team/YuLan-Mini --port 30000 --host 0.0.0.0
```
---
## The Team
YuLan-Mini is developed and maintained by [AI Box, Renmin University of China](http://aibox.ruc.edu.cn/).
## License
- The code in this repository is released under the [MIT License](./LICENSE).
- Policies regarding the use of model weights, intermediate optimizer states, and training data will be announced in future updates.
- Limitations: Despite our efforts to mitigate safety concerns and encourage the generation of ethical and lawful text, the probabilistic nature of language models may still lead to unexpected outputs. For instance, responses might contain bias, discrimination, or other harmful content. Please refrain from disseminating such content. We are not liable for any consequences arising from the spread of harmful information.
## Citation
If you find YuLan-Mini helpful for your research or development, please cite [our technical report](https://arxiv.org/abs/2412.17743):
```
@misc{hu2024yulanmini,
title={YuLan-Mini: An Open Data-efficient Language Model},
author={Yiwen Hu and Huatong Song and Jia Deng and Jiapeng Wang and Jie Chen and Kun Zhou and Yutao Zhu and Jinhao Jiang and Zican Dong and Wayne Xin Zhao and Ji-Rong Wen},
year={2024},
eprint={2412.17743},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.17743},
}
```
|
ericson333/myanton | ericson333 | 2025-01-03T08:18:49Z | 46 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-03T07:55:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: myanton
---
# Myanton
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `myanton` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericson333/myanton', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Hemg/EMOTION-AI | Hemg | 2025-01-03T08:13:28Z | 126 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-03T04:25:20Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EMOTION-AI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EMOTION-AI
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4780
- Accuracy: 0.5616
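A minimal inference sketch for trying the classifier (the emotion label set comes from the undocumented training dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hemg/EMOTION-AI")
print(classifier("I can't believe we finally shipped it!"))
```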
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9982 | 271 | 1.5711 | 0.5464 |
| 1.5442 | 2.0 | 543 | 1.4952 | 0.5638 |
| 1.5442 | 2.9982 | 814 | 1.4755 | 0.5657 |
| 1.3192 | 3.9926 | 1084 | 1.4780 | 0.5616 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF | mradermacher | 2025-01-03T08:13:04Z | 480 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"conversational",
"en",
"base_model:allura-org/Qwen2.5-32b-RP-Ink",
"base_model:quantized:allura-org/Qwen2.5-32b-RP-Ink",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-01-01T07:46:21Z | ---
base_model: allura-org/Qwen2.5-32b-RP-Ink
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- roleplay
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
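A single quant can also be fetched without cloning the whole repo; a sketch using `huggingface_hub` (file name taken from the table below):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF",
    filename="Qwen2.5-32b-RP-Ink.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded quant
```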
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-32b-RP-Ink-i1-GGUF/resolve/main/Qwen2.5-32b-RP-Ink.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF | mradermacher | 2025-01-03T08:13:04Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Max-Merge_02012025163610-BI-gemma-2-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-BI-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T06:53:28Z | ---
base_model: zelk12/MT-Max-Merge_02012025163610-BI-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-BI-gemma-2-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-BI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-BI-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ontocord/riverbed | ontocord | 2025-01-03T08:10:17Z | 8 | 4 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-05-13T05:21:40Z | ---
license: apache-2.0
---
These are basic classifiers and a BM25 index of Wikipedia used for data tooling research.
They build on kenhktsui/llm-data-textbook-quality-fasttext-classifer-v1's classifier (MIT) and TurkuNLP's register classifiers.
```
import fasttext, os

# Fetch the classifier weights on first run
if not os.path.exists("expert_classify.ftz"):
    os.system("wget http://dl.turkunlp.org/register-labeling-model/fasttext_model.bin")
    os.system("wget https://huggingface.co/ontocord/riverbed/resolve/main/rj_model.bin")
    os.system("wget https://huggingface.co/kenhktsui/llm-data-textbook-quality-fasttext-classifer-v1/resolve/main/model_textbook_quality.bin")
    os.system("wget https://huggingface.co/ontocord/riverbed/resolve/main/expert_classify.ftz")

text = "Your document text here".replace("\n", " ")  # fastText expects single-line input

### red pajama filter. pred_label "__label__wiki" is data we do not wish to keep.
red_pajama_model = fasttext.load_model("rj_model.bin")
(pred_label,), (pred_prob,) = red_pajama_model.predict(text)
if pred_label == "__label__cc":
    pred_prob = 1 - pred_prob

### turkunlp registry labeler: https://github.com/TurkuNLP/register-labeling
domain_model = fasttext.load_model("fasttext_model.bin")
(pred_label,), (pred_prob,) = domain_model.predict(text)

### Pile domain such as github, arxiv, etc.
pile_model = fasttext.load_model("expert_classify.ftz")
(pred_label,), (pred_prob,) = pile_model.predict(text)

### Textbook quality - e.g., textbooks are all you need
textbook_model = fasttext.load_model("model_textbook_quality.bin")
(pred_label,), (pred_prob,) = textbook_model.predict(text)
```
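A sketch of turning these scores into a keep/drop rule (the 0.5 threshold is invented for the example; tune it on your own data):
```python
def keep_document(text: str, threshold: float = 0.5) -> bool:
    # Uses red_pajama_model loaded above; fastText expects single-line input
    (label,), (prob,) = red_pajama_model.predict(text.replace("\n", " "))
    # Score "cc-ness"; __label__wiki is the data we do not wish to keep
    cc_score = prob if label == "__label__cc" else 1.0 - prob
    return cc_score > threshold
```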
See the files here: https://huggingface.co/ontocord/riverbed/tree/main
This includes a small Whoosh search index of Wikidata, useful as background knowledge for LLMs.
Installation:
```
import os

# Clone the repo (contains the Whoosh index files)
if not os.path.exists("./riverbed"):
    os.system("git clone https://huggingface.co/ontocord/riverbed")
    os.system("pip install -q whoosh")

import whoosh.index as whoosh_index
from whoosh.qparser import QueryParser
from whoosh.analysis import StemmingAnalyzer, Filter

class MyFilter(Filter):
    # Custom analyzer filter the index's schema was built with: tokens are
    # lowercased; tokens longer than 5 chars are yielded both in full and
    # truncated to their first 5 characters.
    def __call__(self, tokens):
        for t in tokens:
            t.text = t.text.lower()
            if len(t.text) > 5:
                yield t
                t.text = t.text[:5]
            yield t

try:
    # Notebook-style guard: only open the index once
    if qp is None: assert False
except:
    bm25_dir = "./riverbed"
    index = whoosh_index.open_dir(bm25_dir)
    searcher = index.searcher()
    qp = QueryParser("content", schema=index.schema)
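
# Illustrative query against the index (assumes the "content" field the parser above targets)
results = searcher.search(qp.parse("large language models"), limit=5)
for hit in results:
    print(hit.score, hit.fields())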
``` |
Rich-J/subnet29_upload_c01_Jan3_2 | Rich-J | 2025-01-03T08:05:48Z | 346 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:39:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jonny001/WBMH-v1.1 | Jonny001 | 2025-01-03T08:05:42Z | 2,537 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"NSFW",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T06:41:28Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- NSFW
widget:
- text: '-'
output:
url: images/1.jpg
- text: '-'
output:
url: images/2.jpg
- text: '-'
output:
url: images/3.jpg
- text: '-'
output:
url: images/4.jpg
- text: '-'
output:
url: images/5.jpg
- text: '-'
output:
url: images/6.jpg
- text: '-'
output:
url: images/7.jpg
- text: '-'
output:
url: images/8.jpg
- text: '-'
output:
url: images/9.jpg
- text: '-'
output:
url: images/10.jpg
- text: '-'
output:
url: images/11.jpg
- text: '-'
output:
url: images/12.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
### ⚠ This model has the capability to generate NSFW images. Use responsibly.
# Sample Images
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonny001/WBMH-v1.1/tree/main) them in the Files & versions tab.
-------------------------------------------------------------------------------------
## Credits
Click [Here](https://civitai.com/models/1092141/wbmh-flux)
|
kapsb2171/modernbert-llm-router | kapsb2171 | 2025-01-03T08:03:49Z | 85 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-03T04:02:25Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: modernbert-llm-router
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-llm-router
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
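For reference, these settings map onto `TrainingArguments` roughly as follows (a sketch; dataset and model wiring omitted):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="modernbert-llm-router",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```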
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 1.0 | 71 | 0.0000 | 1.0 |
| 0.0453 | 2.0 | 142 | 0.0000 | 1.0 |
| 0.0 | 3.0 | 213 | 0.0000 | 1.0 |
| 0.0 | 4.0 | 284 | 0.0000 | 1.0 |
| 0.0 | 5.0 | 355 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF | matrixportal | 2025-01-03T08:00:17Z | 38 | 1 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"tr",
"base_model:WiroAI/wiroai-turkish-llm-9b",
"base_model:quantized:WiroAI/wiroai-turkish-llm-9b",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:59:53Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: WiroAI/wiroai-turkish-llm-9b
language:
- tr
model-index:
- name: wiroai-turkish-llm-9b
results:
- task:
type: multiple-choice
dataset:
name: MMLU_TR_V0.2
type: multiple-choice
metrics:
- type: 5-shot
value: 0.5982
name: 5-shot
verified: false
- type: 0-shot
value: 0.4991
name: 0-shot
verified: false
- type: 25-shot
value: 0.5367
name: 25-shot
verified: false
- type: 10-shot
value: 0.5701
name: 10-shot
verified: false
- type: 5-shot
value: 0.6682
name: 5-shot
verified: false
- type: 5-shot
value: 0.6058
name: 5-shot
verified: false
---
# matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF
This model was converted to GGUF format from [`WiroAI/wiroai-turkish-llm-9b`](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_M-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_m.gguf -c 2048
```
|
mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF | mradermacher | 2025-01-03T08:00:05Z | 53 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T07:31:59Z | ---
base_model: zelk12/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-GP-gemma-2-MTg4MT5g4-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
johahi/flashzoi-replicate-0 | johahi | 2025-01-03T07:58:18Z | 4,269 | 0 | null | [
"pytorch",
"safetensors",
"borzoi",
"biology",
"genomics",
"license:mit",
"region:us"
]
| null | 2024-10-25T13:16:06Z | ---
license: mit
tags:
- biology
- genomics
--- |
KoichiYasuoka/roberta-base-chinese-upos | KoichiYasuoka | 2025-01-03T07:57:12Z | 106 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"pos",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-base-chinese",
"base_model:finetune:KoichiYasuoka/roberta-base-chinese",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-15T13:06:00Z | ---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-base-chinese
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# roberta-base-chinese-upos
## Model Description
This is a RoBERTa model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [roberta-base-chinese](https://huggingface.co/KoichiYasuoka/roberta-base-chinese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-chinese-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-chinese-upos")
```
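For a quick end-to-end check, here is a minimal tagging sketch (the input sentence is an arbitrary example; the pipeline pattern mirrors the author's sibling UPOS models):
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-chinese-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
print(pipeline("三个臭皮匠胜过一个诸葛亮"))  # prints word spans with their UPOS tags
```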
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
KoichiYasuoka/deberta-xlarge-chinese-erlangshen-upos | KoichiYasuoka | 2025-01-03T07:57:09Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"chinese",
"pos",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"base_model:IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese",
"base_model:finetune:IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-01-03T09:09:49Z | ---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
---
# deberta-xlarge-chinese-erlangshen-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on Chinese texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [Erlangshen-DeBERTa-v2-710M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-xlarge-chinese-erlangshen-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-xlarge-chinese-erlangshen-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-xlarge-chinese-erlangshen-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
KoichiYasuoka/chinese-bert-wwm-ext-upos | KoichiYasuoka | 2025-01-03T07:56:57Z | 112 | 8 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"chinese",
"pos",
"wikipedia",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"base_model:hfl/chinese-bert-wwm-ext",
"base_model:finetune:hfl/chinese-bert-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
base_model: hfl/chinese-bert-wwm-ext
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
---
# chinese-bert-wwm-ext-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF | matrixportal | 2025-01-03T07:56:37Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"tr",
"base_model:WiroAI/wiroai-turkish-llm-9b",
"base_model:quantized:WiroAI/wiroai-turkish-llm-9b",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:56:12Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: WiroAI/wiroai-turkish-llm-9b
language:
- tr
model-index:
- name: wiroai-turkish-llm-9b
results:
- task:
type: multiple-choice
dataset:
name: MMLU_TR_V0.2
type: multiple-choice
metrics:
- type: 5-shot
value: 0.5982
name: 5-shot
verified: false
- type: 0-shot
value: 0.4991
name: 0-shot
verified: false
- type: 25-shot
value: 0.5367
name: 25-shot
verified: false
- type: 10-shot
value: 0.5701
name: 10-shot
verified: false
- type: 5-shot
value: 0.6682
name: 5-shot
verified: false
- type: 5-shot
value: 0.6058
name: 5-shot
verified: false
---
# matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF
This model was converted to GGUF format from [`WiroAI/wiroai-turkish-llm-9b`](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -c 2048
```
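Alternatively, a minimal Python sketch via `llama-cpp-python` (assumes `pip install llama-cpp-python`; the prompt is an arbitrary example, not part of the original card):
```python
# Load the Q4_K_S quant directly from the Hub and run a short completion
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF",
    filename="wiroai-turkish-llm-9b-q4_k_s.gguf",
)
out = llm("Hayatın ve evrenin anlamı", max_tokens=64)
print(out["choices"][0]["text"])
```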
|
tuanna08go/8d71750f-4381-4439-b4c9-b191859e6304 | tuanna08go | 2025-01-03T07:56:15Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-03T07:34:45Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8d71750f-4381-4439-b4c9-b191859e6304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2d8416ab23c11ed2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2d8416ab23c11ed2_train_data.json
type:
field_input: positive
field_instruction: anchor
field_output: negative
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/8d71750f-4381-4439-b4c9-b191859e6304
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/2d8416ab23c11ed2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d71750f-4381-4439-b4c9-b191859e6304
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8d71750f-4381-4439-b4c9-b191859e6304
warmup_steps: 2
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8d71750f-4381-4439-b4c9-b191859e6304
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.4498
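For reference, a minimal inference sketch that loads this LoRA adapter on top of the base model (a sketch under the assumption that the adapter weights sit at the repo root; untested):
```python
# Load base model + LoRA adapter, then generate from an arbitrary prompt
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "tuanna08go/8d71750f-4381-4439-b4c9-b191859e6304")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```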
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0222 | 1 | 1.8411 |
| No log | 0.2 | 9 | 1.0835 |
| 1.536 | 0.4 | 18 | 0.5607 |
| 0.689 | 0.6 | 27 | 0.4736 |
| 0.4522 | 0.8 | 36 | 0.4535 |
| 0.443 | 1.0 | 45 | 0.4498 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AIR-hl/Qwen2.5-1.5B-SimPO | AIR-hl | 2025-01-03T07:53:51Z | 154 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"trl",
"qwen",
"simpo",
"alignment",
"custome",
"chat",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:AIR-hl/Qwen2.5-1.5B-ultrachat200k",
"base_model:finetune:AIR-hl/Qwen2.5-1.5B-ultrachat200k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:35:11Z | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model:
- AIR-hl/Qwen2.5-1.5B-ultrachat200k
pipeline_tag: text-generation
tags:
- trl
- qwen
- simpo
- alignment
- transformers
- custome
- chat
---
# Qwen2.5-1.5B-SimPO
## Model Details
- **Model type:** aligned model
- **License:** Apache license 2.0
- **Finetuned from model:** [AIR-hl/Qwen2.5-1.5B-ultrachat200k](https://huggingface.co/AIR-hl/Qwen2.5-1.5B-ultrachat200k)
- **Training data:** [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- **Training framework:** [trl](https://github.com/huggingface/trl)
## Training Details
devices: 4 * NPU 910B-64GB \
precision: bf16 mixed-precision \
global_batch_size: 128
### Training Hyperparameters
`beta`: 1 \
`gamma`: 0.1 \
`bf16`: True \
`learning_rate`: 1e-6 \
`lr_scheduler_type`: cosine \
`per_device_train_batch_size`: 16 \
`gradient_accumulation_steps`: 2 \
`torch_dtype`: bfloat16 \
`num_train_epochs`: 1 \
`max_prompt_length`: 512 \
`max_length`: 1024 \
`warmup_ratio`: 0.05
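For context, `beta` and `gamma` enter the SimPO objective (as defined in the SimPO paper; \(\beta\) scales the length-normalized log-probability reward and \(\gamma\) is the target reward margin):

$$
\mathcal{L}_{\text{SimPO}} = -\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\right)
$$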
### Results
`init_train_loss`: 0.7551 \
`final_train_loss`: 0.6715 \
`accuracy`: 0.6375 \
`reward_margin`: 0.3633
### Training script
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import (
CPOConfig,
CPOTrainer,
ModelConfig,
ScriptArguments,
TrlParser,
get_kbit_device_map,
get_peft_config,
get_quantization_config,
)
from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE
if __name__ == "__main__":
parser = TrlParser((ScriptArguments, CPOConfig, ModelConfig))
script_args, training_args, model_config = parser.parse_args_and_config()
torch_dtype = (
model_config.torch_dtype
if model_config.torch_dtype in ["auto", None]
else getattr(torch, model_config.torch_dtype)
)
quantization_config = get_quantization_config(model_config)
model_kwargs = dict(
revision=model_config.model_revision,
attn_implementation=model_config.attn_implementation,
torch_dtype=torch_dtype,
use_cache=False if training_args.gradient_checkpointing else True,
device_map=get_kbit_device_map() if quantization_config is not None else None,
quantization_config=quantization_config,
)
model = AutoModelForCausalLM.from_pretrained(
model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
)
peft_config = get_peft_config(model_config)
tokenizer = AutoTokenizer.from_pretrained(
model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
if tokenizer.chat_template is None:
tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE
if script_args.ignore_bias_buffers:
model._ddp_params_and_buffers_to_ignore = [
name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool
]
dataset=load_dataset(script_args.dataset_name,
split=script_args.dataset_train_split)
dataset=dataset.select_columns(['prompt', 'chosen', 'rejected'])
trainer = CPOTrainer(
model,
args=training_args,
train_dataset=dataset,
processing_class=tokenizer,
peft_config=peft_config,
)
trainer.train()
trainer.save_model(training_args.output_dir)
```
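For a quick smoke test after training, a minimal chat sketch (a sketch only; it assumes the tokenizer ships a chat template, and the question is arbitrary):
```python
# Generate a short chat completion from the aligned model
from transformers import pipeline

pipe = pipeline("text-generation", model="AIR-hl/Qwen2.5-1.5B-SimPO", device_map="auto")
messages = [{"role": "user", "content": "Give me one tip for writing clear code."}]
print(pipe(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```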
|
QuantFactory/Triangulum-1B-GGUF | QuantFactory | 2025-01-03T07:51:38Z | 154 | 2 | transformers | [
"transformers",
"gguf",
"triangulum_1b",
"sft",
"chain_of_thought",
"ollama",
"text-generation-inference",
"llama_for_causal_lm",
"reasoning",
"CoT",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-01-03T07:43:31Z |
---
license: creativeml-openrail-m
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
tags:
- triangulum_1b
- sft
- chain_of_thought
- ollama
- text-generation-inference
- llama_for_causal_lm
- reasoning
- CoT
library_name: transformers
metrics:
- code_eval
- accuracy
- competition_math
- character
---
[](https://hf.co/QuantFactory)
# QuantFactory/Triangulum-1B-GGUF
This is a quantized version of [prithivMLmods/Triangulum-1B](https://huggingface.co/prithivMLmods/Triangulum-1B) created using llama.cpp
# Original Model Card

<pre align="center">
__ .__ .__
_/ |_ _______ |__|_____ ____ ____ __ __ | | __ __ _____
\ __\\_ __ \| |\__ \ / \ / ___\ | | \| | | | \ / \
| | | | \/| | / __ \_| | \/ /_/ >| | /| |__| | /| Y Y \
|__| |__| |__|(____ /|___| /\___ / |____/ |____/|____/ |__|_| /
\/ \//_____/ \/
</pre>
# **Triangulum 1B: Multilingual Large Language Models (LLMs)**
Triangulum 1B is a collection of pretrained and instruction-tuned generative models, designed for multilingual applications. These models are trained using synthetic datasets based on long chains of thought, enabling them to perform complex reasoning tasks effectively.
# **Key Features & Model Architecture**
- **Foundation Model**: Built upon LLaMA's autoregressive language model, leveraging an optimized transformer architecture for enhanced performance.
- **Instruction Tuning**: Includes supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align model outputs with human preferences for helpfulness and safety.
- **Multilingual Support**: Designed to handle multiple languages, ensuring broad applicability across diverse linguistic contexts.
---
Triangulum 1B builds on Llama 3.2, an auto-regressive language model with an optimized transformer architecture; the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
# **Training Approach**
1. **Synthetic Datasets**: Utilizes long chain-of-thought synthetic data to enhance reasoning capabilities.
2. **Supervised Fine-Tuning (SFT)**: Aligns the model to specific tasks through curated datasets.
3. **Reinforcement Learning with Human Feedback (RLHF)**: Ensures the model adheres to human values and safety guidelines through iterative training processes.
# **How to use with transformers**
With `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "prithivMLmods/Triangulum-1B"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are the kind and tri-intelligent assistant helping people to understand complex concepts."},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
# **Demo Inference with LlamaForCausalLM**
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('prithivMLmods/Triangulum-1B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"prithivMLmods/Triangulum-1B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
# Define a list of system and user prompts
prompts = [
"""<|im_start|>system
You are the kind and tri-intelligent assistant helping people to understand complex concepts.<|im_end|>
<|im_start|>user
Can you explain the concept of eigenvalues and eigenvectors in a simple way?<|im_end|>
<|im_start|>assistant"""
]
# Generate responses for each prompt
for chat in prompts:
print(f"Prompt:\n{chat}\n")
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response:\n{response}\n{'-'*80}\n")
```
# **Key Adjustments**
1. **System Prompts:** Each prompt defines a different role or persona for the AI to adopt.
2. **User Prompts:** These specify the context or task for the assistant, ranging from teaching to storytelling or career advice.
3. **Looping Through Prompts:** Each prompt is processed in a loop to showcase the model's versatility.
You can expand the list of prompts to explore a variety of scenarios and responses.
# **Use Cases for Triangulum 1B**
- Multilingual content generation
- Question answering and dialogue systems
- Text summarization and analysis
- Translation and localization tasks
# **Technical Details**
Triangulum 1B employs a state-of-the-art autoregressive architecture inspired by LLaMA. The optimized transformer framework ensures both efficiency and scalability, making it suitable for a variety of use cases.
# **How to Run Triangulum 1B on Ollama Locally**
This guide demonstrates the power of running open-source LLMs locally, showcasing examples with different open-source models for various use cases. By the end, you'll be equipped to run any future open-source LLM with ease.
---
## Example 1: How to Run the Triangulum-1B Model
The **Triangulum-1B** model is an open-source LLM known for its capabilities across text-based tasks. We'll interact with it much as we would with ChatGPT, but run it locally with support for quants.
### Step 1: Download the Model
First, download the **Triangulum-1B-F16.gguf** model using the following command:
```bash
ollama run triangulum-1b-f16.gguf
```
### Step 2: Model Initialization and Download
Upon running the command, Ollama will initialize and download the model files. You should see output similar to the following:
```plaintext
pulling manifest
pulling 8934d96d3f08... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 7.0 KB
pulling 7c23fb36d801... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 59 B
pulling fa304d675061... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 91 B
pulling 42ba7f8a01dd... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 557 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)
```
### Step 3: Interact with the Model
Once the model is ready, you can send a prompt. For example, let's ask:
```plaintext
>>> What can you do for me?
```
Asking that question should yield a response like this:
```plaintext
As a responsible AI language model, I am here to assist you with any questions or tasks you may have. Here are some examples of things I can help with:
1. Answering questions: I can provide information on a wide range of topics, from science and technology to history and culture.
2. Generating ideas: I can help you brainstorm ideas for creative projects, or provide suggestions for solving problems.
3. Writing assistance: I can help you with writing tasks such as proofreading, editing, and suggesting alternative words or phrases.
4. Translation: I can translate text from one language to another.
5. Summarizing content: I can summarize long pieces of text, such as articles or documents, into shorter, more digestible versions.
6. Creativity: I can help you generate creative ideas for stories, poems, or other forms of writing.
7. Language learning: I can assist you in learning a new language by providing grammar explanations, vocabulary lists, and practice exercises.
8. Chatting: I'm here to chat with you and provide a response to any question or topic you'd like to discuss.
Please let me know if there is anything specific you would like me to help you with.
```
### Step 4: Exit the Program
To exit the program, simply type:
```plaintext
/exit
```
## Example 2: Running Multi-Modal Models (Future Use)
Ollama supports running multi-modal models where you can send images and ask questions based on them. This section will be updated as more models become available.
## Notes on Using Quantized Models
Quantized models like **triangulum-1b-f16.gguf** are optimized for performance on resource-constrained hardware, making them accessible for local inference.
1. Ensure your system has sufficient VRAM or CPU resources.
2. Use the `.gguf` model format for compatibility with Ollama.
# **Conclusion**
Running the **Triangulum-1B** model with Ollama provides a robust way to leverage open-source LLMs locally for diverse use cases. By following these steps, you can explore the capabilities of other open-source models in the future.
|
AIR-hl/Qwen2.5-1.5B-WPO | AIR-hl | 2025-01-03T07:46:38Z | 145 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"trl",
"qwen",
"wpo",
"alignment",
"custome",
"chat",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:AIR-hl/Qwen2.5-1.5B-ultrachat200k",
"base_model:finetune:AIR-hl/Qwen2.5-1.5B-ultrachat200k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-02T05:47:11Z | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model:
- AIR-hl/Qwen2.5-1.5B-ultrachat200k
pipeline_tag: text-generation
tags:
- trl
- qwen
- wpo
- alignment
- transformers
- custome
- chat
---
# Qwen2.5-1.5B-WPO
## Model Details
- **Model type:** aligned model
- **License:** Apache license 2.0
- **Finetuned from model:** [AIR-hl/Qwen2.5-1.5B-ultrachat200k](https://huggingface.co/AIR-hl/Qwen2.5-1.5B-ultrachat200k)
- **Training data:** [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- **Training framework:** [trl](https://github.com/huggingface/trl)
## Training Details
devices: 4 * NPU 910B-64GB \
precision: bf16 mixed-precision \
global_batch_size: 128
### Training Hyperparameters
`attn_implementation`: None \
`beta`: 0.01 \
`bf16`: True \
`learning_rate`: 1e-6 \
`lr_scheduler_type`: cosine \
`per_device_train_batch_size`: 8 \
`gradient_accumulation_steps`: 4 \
`torch_dtype`: bfloat16 \
`num_train_epochs`: 1 \
`max_prompt_length`: 512 \
`max_length`: 1024 \
`warmup_ratio`: 0.05
### Results
`init_train_loss`: 0.2410 \
`final_train_loss`: 0.1367 \
`accuracy`: 0.65 \
`reward_margin`: 0.2402
### Training script
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
import multiprocessing
from trl import (
DPOConfig,
DPOTrainer,
ModelConfig,
ScriptArguments,
TrlParser,
get_kbit_device_map,
get_peft_config,
get_quantization_config,
)
from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE
if __name__ == "__main__":
parser = TrlParser((ScriptArguments, DPOConfig, ModelConfig))
script_args, training_args, model_config = parser.parse_args_and_config()
torch_dtype = (
model_config.torch_dtype
if model_config.torch_dtype in ["auto", None]
else getattr(torch, model_config.torch_dtype)
)
quantization_config = get_quantization_config(model_config)
model_kwargs = dict(
revision=model_config.model_revision,
attn_implementation=model_config.attn_implementation,
torch_dtype=torch_dtype,
use_cache=False if training_args.gradient_checkpointing else True,
device_map=get_kbit_device_map() if quantization_config is not None else None,
quantization_config=quantization_config,
)
model = AutoModelForCausalLM.from_pretrained(
model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
)
peft_config = get_peft_config(model_config)
if peft_config is None:
ref_model = AutoModelForCausalLM.from_pretrained(
model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
)
else:
ref_model = None
tokenizer = AutoTokenizer.from_pretrained(
model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
if tokenizer.chat_template is None:
tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE
if script_args.ignore_bias_buffers:
model._ddp_params_and_buffers_to_ignore = [
name for name, buffer in model.named_buffers() if buffer.dtype == torch.bool
]
dataset = load_dataset(script_args.dataset_name,
split=script_args.dataset_train_split)
dataset=dataset.select_columns(['chosen', 'prompt', 'rejected'])
trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
processing_class=tokenizer,
peft_config=peft_config,
)
trainer.train()
trainer.save_model(training_args.output_dir)
```
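For completeness, a minimal generation sketch that applies the chat template explicitly (a sketch only; the prompt is an arbitrary example):
```python
# Build a chat prompt with the model's template, then generate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AIR-hl/Qwen2.5-1.5B-WPO")
model = AutoModelForCausalLM.from_pretrained(
    "AIR-hl/Qwen2.5-1.5B-WPO", torch_dtype=torch.bfloat16, device_map="auto"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize preference optimization in one sentence."}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```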
|
Rich-J/subnet29_upload_c01_Jan3_0 | Rich-J | 2025-01-03T07:45:38Z | 429 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:40:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
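A minimal, hypothetical starter sketch (the repo's tags mark this as a `phi3` text-generation model; everything below is an assumption, not documented usage):
```python
# Hypothetical quick start: load the checkpoint as a causal LM and generate
from transformers import pipeline

generator = pipeline("text-generation", model="Rich-J/subnet29_upload_c01_Jan3_0", device_map="auto")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```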
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/roberta-base-thai-syllable-upos | KoichiYasuoka | 2025-01-03T07:44:25Z | 116 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-base-thai-syllable",
"base_model:finetune:KoichiYasuoka/roberta-base-thai-syllable",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
language:
- "th"
tags:
- "thai"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-base-thai-syllable
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "หลายหัวดีกว่าหัวเดียว"
---
# roberta-base-thai-syllable-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
joannakhek/SmolLM2-FT-MyDataset | joannakhek | 2025-01-03T07:43:17Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:42:46Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joannakhek/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF | mradermacher | 2025-01-03T07:37:18Z | 18 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T07:08:22Z | ---
base_model: zelk12/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MM-gemma-2-MT5g4MTM4-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/electric-sheep-7b-alpha-GGUF | mradermacher | 2025-01-03T07:28:20Z | 52 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"dataset:maldv/cyberpunk",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Weyaxi/sci-datasets",
"dataset:maldv/conversation-cixot",
"base_model:maldv/electric-sheep-7b-alpha",
"base_model:quantized:maldv/electric-sheep-7b-alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T00:07:03Z | ---
base_model: maldv/electric-sheep-7b-alpha
datasets:
- maldv/cyberpunk
- microsoft/orca-math-word-problems-200k
- Weyaxi/sci-datasets
- maldv/conversation-cixot
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/maldv/electric-sheep-7b-alpha
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/electric-sheep-7b-alpha-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/electric-sheep-7b-alpha-GGUF/resolve/main/electric-sheep-7b-alpha.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KoichiYasuoka/roberta-base-vietnamese-upos | KoichiYasuoka | 2025-01-03T07:27:21Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"vietnamese",
"pos",
"dependency-parsing",
"vi",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-base-vietnamese",
"base_model:finetune:KoichiYasuoka/roberta-base-vietnamese",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-14T02:54:06Z | ---
language:
- "vi"
tags:
- "vietnamese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-base-vietnamese
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "Hai cái đầu thì tốt hơn một."
---
# roberta-base-vietnamese-upos
## Model Description
This is a RoBERTa model pre-trained on Vietnamese texts for POS-tagging and dependency-parsing, derived from [roberta-base-vietnamese](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-vietnamese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-vietnamese-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("Hai cái đầu thì tốt hơn một."))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-vietnamese-upos")
print(nlp("Hai cái đầu thì tốt hơn một."))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
HIT-TMG/KaLM-embedding-multilingual-mini-v1 | HIT-TMG | 2025-01-03T07:26:50Z | 4,530 | 19 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2501.01028",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-08-27T08:56:33Z | ---
license: mit
model-index:
- name: KaLM-Embedding
results:
- dataset:
config: en-ext
name: MTEB AmazonCounterfactualClassification (en-ext)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 74.16041979010495
- type: ap
value: 22.731316107205824
- type: ap_weighted
value: 22.731316107205824
- type: f1
value: 61.311184650259634
- type: f1_weighted
value: 78.92070802470501
- type: main_score
value: 74.16041979010495
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 72.35820895522387
- type: ap
value: 34.13026440006763
- type: ap_weighted
value: 34.13026440006763
- type: f1
value: 65.91101941691169
- type: f1_weighted
value: 74.90947851184335
- type: main_score
value: 72.35820895522387
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 95.2693
- type: ap
value: 93.69278757537118
- type: ap_weighted
value: 93.69278757537118
- type: f1
value: 95.26705627226383
- type: f1_weighted
value: 95.26705627226384
- type: main_score
value: 95.2693
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 51.01
- type: f1
value: 48.69903082137716
- type: f1_weighted
value: 48.69903082137716
- type: main_score
value: 51.01
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 56.713
- type: map_at_1
value: 31.436999999999998
- type: map_at_10
value: 47.632000000000005
- type: map_at_100
value: 48.418
- type: map_at_1000
value: 48.421
- type: map_at_20
value: 48.274
- type: map_at_3
value: 42.568
- type: map_at_5
value: 45.473
- type: mrr_at_1
value: 31.934566145092464
- type: mrr_at_10
value: 47.80803359750735
- type: mrr_at_100
value: 48.594181951484266
- type: mrr_at_1000
value: 48.59689299100106
- type: mrr_at_20
value: 48.450028297368256
- type: mrr_at_3
value: 42.7453769559033
- type: mrr_at_5
value: 45.625889046941744
- type: nauc_map_at_1000_diff1
value: 11.309764384647323
- type: nauc_map_at_1000_max
value: -12.696935142377729
- type: nauc_map_at_1000_std
value: -12.712119206533423
- type: nauc_map_at_100_diff1
value: 11.311862879869643
- type: nauc_map_at_100_max
value: -12.688064356825764
- type: nauc_map_at_100_std
value: -12.708245196445258
- type: nauc_map_at_10_diff1
value: 11.180369964075947
- type: nauc_map_at_10_max
value: -12.557609097774142
- type: nauc_map_at_10_std
value: -12.86587547951096
- type: nauc_map_at_1_diff1
value: 13.545199807116537
- type: nauc_map_at_1_max
value: -15.05694303234355
- type: nauc_map_at_1_std
value: -13.135999468701948
- type: nauc_map_at_20_diff1
value: 11.301805884587152
- type: nauc_map_at_20_max
value: -12.580961418657783
- type: nauc_map_at_20_std
value: -12.626994998566007
- type: nauc_map_at_3_diff1
value: 11.021077829815507
- type: nauc_map_at_3_max
value: -13.20022886911152
- type: nauc_map_at_3_std
value: -13.127711855412471
- type: nauc_map_at_5_diff1
value: 11.138694322935278
- type: nauc_map_at_5_max
value: -12.748146823323433
- type: nauc_map_at_5_std
value: -13.183789787796002
- type: nauc_mrr_at_1000_diff1
value: 9.677867008889587
- type: nauc_mrr_at_1000_max
value: -13.420330905625857
- type: nauc_mrr_at_1000_std
value: -12.792519437553008
- type: nauc_mrr_at_100_diff1
value: 9.680107626011944
- type: nauc_mrr_at_100_max
value: -13.411410836965254
- type: nauc_mrr_at_100_std
value: -12.788644939208261
- type: nauc_mrr_at_10_diff1
value: 9.589680890065521
- type: nauc_mrr_at_10_max
value: -13.261739941834202
- type: nauc_mrr_at_10_std
value: -12.944134710141187
- type: nauc_mrr_at_1_diff1
value: 12.085031779160564
- type: nauc_mrr_at_1_max
value: -15.02002211766975
- type: nauc_mrr_at_1_std
value: -13.355756268733016
- type: nauc_mrr_at_20_diff1
value: 9.677873154739816
- type: nauc_mrr_at_20_max
value: -13.300790622622587
- type: nauc_mrr_at_20_std
value: -12.707185337847148
- type: nauc_mrr_at_3_diff1
value: 9.472988614112802
- type: nauc_mrr_at_3_max
value: -13.919505060412762
- type: nauc_mrr_at_3_std
value: -13.164277574722277
- type: nauc_mrr_at_5_diff1
value: 9.467059127457365
- type: nauc_mrr_at_5_max
value: -13.584824274866206
- type: nauc_mrr_at_5_std
value: -13.199173673034172
- type: nauc_ndcg_at_1000_diff1
value: 11.117383537119457
- type: nauc_ndcg_at_1000_max
value: -12.047108406166398
- type: nauc_ndcg_at_1000_std
value: -12.4255053792295
- type: nauc_ndcg_at_100_diff1
value: 11.199092599092824
- type: nauc_ndcg_at_100_max
value: -11.816562361312737
- type: nauc_ndcg_at_100_std
value: -12.321599738274934
- type: nauc_ndcg_at_10_diff1
value: 10.619688096042301
- type: nauc_ndcg_at_10_max
value: -10.991140718309158
- type: nauc_ndcg_at_10_std
value: -12.913717053782964
- type: nauc_ndcg_at_1_diff1
value: 13.545199807116537
- type: nauc_ndcg_at_1_max
value: -15.05694303234355
- type: nauc_ndcg_at_1_std
value: -13.135999468701948
- type: nauc_ndcg_at_20_diff1
value: 11.079239059115043
- type: nauc_ndcg_at_20_max
value: -11.107522795986476
- type: nauc_ndcg_at_20_std
value: -11.917269092652596
- type: nauc_ndcg_at_3_diff1
value: 10.328082482022936
- type: nauc_ndcg_at_3_max
value: -12.609971276627075
- type: nauc_ndcg_at_3_std
value: -13.581875503621793
- type: nauc_ndcg_at_5_diff1
value: 10.598034768408395
- type: nauc_ndcg_at_5_max
value: -11.664284036838387
- type: nauc_ndcg_at_5_std
value: -13.738318585447246
- type: nauc_precision_at_1000_diff1
value: 3.733355117431035
- type: nauc_precision_at_1000_max
value: 22.126811641224737
- type: nauc_precision_at_1000_std
value: 77.22610895194498
- type: nauc_precision_at_100_diff1
value: 27.682371417569136
- type: nauc_precision_at_100_max
value: 55.30719621706036
- type: nauc_precision_at_100_std
value: 51.87386775498134
- type: nauc_precision_at_10_diff1
value: 7.322656348885176
- type: nauc_precision_at_10_max
value: 0.2704135680738493
- type: nauc_precision_at_10_std
value: -12.841217202927321
- type: nauc_precision_at_1_diff1
value: 13.545199807116537
- type: nauc_precision_at_1_max
value: -15.05694303234355
- type: nauc_precision_at_1_std
value: -13.135999468701948
- type: nauc_precision_at_20_diff1
value: 10.486079260481048
- type: nauc_precision_at_20_max
value: 14.003109613986817
- type: nauc_precision_at_20_std
value: 4.910816164725959
- type: nauc_precision_at_3_diff1
value: 8.271896718206264
- type: nauc_precision_at_3_max
value: -10.827383320727357
- type: nauc_precision_at_3_std
value: -15.106532989878312
- type: nauc_precision_at_5_diff1
value: 8.834654894956898
- type: nauc_precision_at_5_max
value: -7.540039352361894
- type: nauc_precision_at_5_std
value: -15.969132098353741
- type: nauc_recall_at_1000_diff1
value: 3.733355117431255
- type: nauc_recall_at_1000_max
value: 22.126811641217202
- type: nauc_recall_at_1000_std
value: 77.22610895193765
- type: nauc_recall_at_100_diff1
value: 27.682371417566458
- type: nauc_recall_at_100_max
value: 55.30719621705814
- type: nauc_recall_at_100_std
value: 51.8738677549813
- type: nauc_recall_at_10_diff1
value: 7.322656348885266
- type: nauc_recall_at_10_max
value: 0.27041356807404016
- type: nauc_recall_at_10_std
value: -12.841217202927096
- type: nauc_recall_at_1_diff1
value: 13.545199807116537
- type: nauc_recall_at_1_max
value: -15.05694303234355
- type: nauc_recall_at_1_std
value: -13.135999468701948
- type: nauc_recall_at_20_diff1
value: 10.486079260481167
- type: nauc_recall_at_20_max
value: 14.003109613986972
- type: nauc_recall_at_20_std
value: 4.910816164726593
- type: nauc_recall_at_3_diff1
value: 8.271896718206312
- type: nauc_recall_at_3_max
value: -10.827383320727314
- type: nauc_recall_at_3_std
value: -15.106532989878287
- type: nauc_recall_at_5_diff1
value: 8.834654894956909
- type: nauc_recall_at_5_max
value: -7.540039352361923
- type: nauc_recall_at_5_std
value: -15.969132098353715
- type: ndcg_at_1
value: 31.436999999999998
- type: ndcg_at_10
value: 56.713
- type: ndcg_at_100
value: 59.887
- type: ndcg_at_1000
value: 59.94500000000001
- type: ndcg_at_20
value: 58.98
- type: ndcg_at_3
value: 46.261
- type: ndcg_at_5
value: 51.501
- type: precision_at_1
value: 31.436999999999998
- type: precision_at_10
value: 8.578
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.73
- type: precision_at_3
value: 18.990000000000002
- type: precision_at_5
value: 13.94
- type: recall_at_1
value: 31.436999999999998
- type: recall_at_10
value: 85.775
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 94.595
- type: recall_at_3
value: 56.97
- type: recall_at_5
value: 69.70100000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 47.077382303485514
- type: v_measure
value: 47.077382303485514
- type: v_measure_std
value: 14.00039477846898
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 39.11589804504639
- type: v_measure
value: 39.11589804504639
- type: v_measure_std
value: 14.697039096668583
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: main_score
value: 60.01096720382656
- type: map
value: 60.01096720382656
- type: mrr
value: 74.4235588972431
- type: nAUC_map_diff1
value: 14.296647950054817
- type: nAUC_map_max
value: 21.720215707737303
- type: nAUC_map_std
value: 18.20845510591147
- type: nAUC_mrr_diff1
value: 23.769639422872142
- type: nAUC_mrr_max
value: 33.07785201075024
- type: nAUC_mrr_std
value: 18.461570711690968
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 87.60987223075549
- type: cosine_spearman
value: 86.23750714877664
- type: euclidean_pearson
value: 86.21541799525612
- type: euclidean_spearman
value: 86.23750714877664
- type: main_score
value: 86.23750714877664
- type: manhattan_pearson
value: 86.1758097383748
- type: manhattan_spearman
value: 86.37365482930716
- type: pearson
value: 87.60987223075549
- type: spearman
value: 86.23750714877664
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 79.16883116883118
- type: f1
value: 78.34840435712427
- type: f1_weighted
value: 78.3484043571243
- type: main_score
value: 79.16883116883118
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 39.29881417268574
- type: v_measure
value: 39.29881417268574
- type: v_measure_std
value: 1.1874002185778423
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 33.9614529554878
- type: v_measure
value: 33.9614529554878
- type: v_measure_std
value: 0.6283058974037568
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: main_score
value: 51.891
- type: map_at_1
value: 33.335
- type: map_at_10
value: 45.206
- type: map_at_100
value: 46.794000000000004
- type: map_at_1000
value: 46.910000000000004
- type: map_at_20
value: 46.107
- type: map_at_3
value: 41.478
- type: map_at_5
value: 43.491
- type: mrr_at_1
value: 40.2002861230329
- type: mrr_at_10
value: 51.27449644617026
- type: mrr_at_100
value: 51.94262681998448
- type: mrr_at_1000
value: 51.98748435659779
- type: mrr_at_20
value: 51.679253979427365
- type: mrr_at_3
value: 48.545541249403904
- type: mrr_at_5
value: 50.26943252265138
- type: nauc_map_at_1000_diff1
value: 53.279892622864466
- type: nauc_map_at_1000_max
value: 37.30026325175372
- type: nauc_map_at_1000_std
value: -5.31272778840401
- type: nauc_map_at_100_diff1
value: 53.260255242354035
- type: nauc_map_at_100_max
value: 37.34138849578408
- type: nauc_map_at_100_std
value: -5.223853769998806
- type: nauc_map_at_10_diff1
value: 53.01168904143889
- type: nauc_map_at_10_max
value: 36.52985848709173
- type: nauc_map_at_10_std
value: -6.60737122397934
- type: nauc_map_at_1_diff1
value: 57.48774969135532
- type: nauc_map_at_1_max
value: 32.87239964104006
- type: nauc_map_at_1_std
value: -9.65950934039381
- type: nauc_map_at_20_diff1
value: 53.014218960477145
- type: nauc_map_at_20_max
value: 36.95460780612761
- type: nauc_map_at_20_std
value: -5.7846033314898975
- type: nauc_map_at_3_diff1
value: 53.386035964079085
- type: nauc_map_at_3_max
value: 35.494196154327376
- type: nauc_map_at_3_std
value: -7.761241655463379
- type: nauc_map_at_5_diff1
value: 52.52045589069632
- type: nauc_map_at_5_max
value: 35.87189518536011
- type: nauc_map_at_5_std
value: -7.280825988785475
- type: nauc_mrr_at_1000_diff1
value: 52.21043432899831
- type: nauc_mrr_at_1000_max
value: 37.52636619273335
- type: nauc_mrr_at_1000_std
value: -5.458572482733526
- type: nauc_mrr_at_100_diff1
value: 52.19543099780388
- type: nauc_mrr_at_100_max
value: 37.528593941814115
- type: nauc_mrr_at_100_std
value: -5.434274045688043
- type: nauc_mrr_at_10_diff1
value: 51.89698285990516
- type: nauc_mrr_at_10_max
value: 37.444484137976744
- type: nauc_mrr_at_10_std
value: -5.682595266827838
- type: nauc_mrr_at_1_diff1
value: 56.17142686081959
- type: nauc_mrr_at_1_max
value: 36.815076888109125
- type: nauc_mrr_at_1_std
value: -9.1961282634956
- type: nauc_mrr_at_20_diff1
value: 52.13365466798001
- type: nauc_mrr_at_20_max
value: 37.47508491548877
- type: nauc_mrr_at_20_std
value: -5.38723388397372
- type: nauc_mrr_at_3_diff1
value: 52.261215410063635
- type: nauc_mrr_at_3_max
value: 38.06288987541818
- type: nauc_mrr_at_3_std
value: -6.3586931672947555
- type: nauc_mrr_at_5_diff1
value: 51.361626281443954
- type: nauc_mrr_at_5_max
value: 37.21931557944178
- type: nauc_mrr_at_5_std
value: -6.2463983922879125
- type: nauc_ndcg_at_1000_diff1
value: 52.302043350366354
- type: nauc_ndcg_at_1000_max
value: 38.20021133882071
- type: nauc_ndcg_at_1000_std
value: -2.4092846074901835
- type: nauc_ndcg_at_100_diff1
value: 52.08002602041293
- type: nauc_ndcg_at_100_max
value: 38.59011692167586
- type: nauc_ndcg_at_100_std
value: -1.1028958529707618
- type: nauc_ndcg_at_10_diff1
value: 50.96919959110156
- type: nauc_ndcg_at_10_max
value: 37.27781873450064
- type: nauc_ndcg_at_10_std
value: -4.275751021315601
- type: nauc_ndcg_at_1_diff1
value: 56.17142686081959
- type: nauc_ndcg_at_1_max
value: 36.815076888109125
- type: nauc_ndcg_at_1_std
value: -9.1961282634956
- type: nauc_ndcg_at_20_diff1
value: 51.18802925052476
- type: nauc_ndcg_at_20_max
value: 37.37541430996012
- type: nauc_ndcg_at_20_std
value: -2.535809483675881
- type: nauc_ndcg_at_3_diff1
value: 51.55692622850066
- type: nauc_ndcg_at_3_max
value: 38.161090909217535
- type: nauc_ndcg_at_3_std
value: -5.451913542383229
- type: nauc_ndcg_at_5_diff1
value: 49.79865041898466
- type: nauc_ndcg_at_5_max
value: 37.05367743749936
- type: nauc_ndcg_at_5_std
value: -5.333995413688977
- type: nauc_precision_at_1000_diff1
value: -9.765182693652369
- type: nauc_precision_at_1000_max
value: -6.187402469203501
- type: nauc_precision_at_1000_std
value: -1.6165299667925566
- type: nauc_precision_at_100_diff1
value: -3.3699636809298488
- type: nauc_precision_at_100_max
value: 10.763143757354227
- type: nauc_precision_at_100_std
value: 14.6134300235666
- type: nauc_precision_at_10_diff1
value: 12.380848989838922
- type: nauc_precision_at_10_max
value: 27.814295948898703
- type: nauc_precision_at_10_std
value: 9.281809355379423
- type: nauc_precision_at_1_diff1
value: 56.17142686081959
- type: nauc_precision_at_1_max
value: 36.815076888109125
- type: nauc_precision_at_1_std
value: -9.1961282634956
- type: nauc_precision_at_20_diff1
value: 5.172974864217038
- type: nauc_precision_at_20_max
value: 21.610380863767407
- type: nauc_precision_at_20_std
value: 14.897216777831563
- type: nauc_precision_at_3_diff1
value: 32.62574902686228
- type: nauc_precision_at_3_max
value: 38.23786681054578
- type: nauc_precision_at_3_std
value: 1.5049286474387453
- type: nauc_precision_at_5_diff1
value: 20.157338510243537
- type: nauc_precision_at_5_max
value: 33.504499592506924
- type: nauc_precision_at_5_std
value: 5.128885224590291
- type: nauc_recall_at_1000_diff1
value: 52.32430518946571
- type: nauc_recall_at_1000_max
value: 56.03264454563954
- type: nauc_recall_at_1000_std
value: 59.06408303625301
- type: nauc_recall_at_100_diff1
value: 44.41661317138834
- type: nauc_recall_at_100_max
value: 43.511654367641746
- type: nauc_recall_at_100_std
value: 28.435889217482348
- type: nauc_recall_at_10_diff1
value: 41.091326330340564
- type: nauc_recall_at_10_max
value: 32.634495610887825
- type: nauc_recall_at_10_std
value: 0.4940136136777342
- type: nauc_recall_at_1_diff1
value: 57.48774969135532
- type: nauc_recall_at_1_max
value: 32.87239964104006
- type: nauc_recall_at_1_std
value: -9.65950934039381
- type: nauc_recall_at_20_diff1
value: 40.31827375470033
- type: nauc_recall_at_20_max
value: 32.29591796577925
- type: nauc_recall_at_20_std
value: 9.003204772501102
- type: nauc_recall_at_3_diff1
value: 45.516327838347145
- type: nauc_recall_at_3_max
value: 34.64131339427055
- type: nauc_recall_at_3_std
value: -4.883112425443149
- type: nauc_recall_at_5_diff1
value: 40.04821220854672
- type: nauc_recall_at_5_max
value: 31.778912319343245
- type: nauc_recall_at_5_std
value: -3.7415628516202455
- type: ndcg_at_1
value: 40.2
- type: ndcg_at_10
value: 51.891
- type: ndcg_at_100
value: 57.176
- type: ndcg_at_1000
value: 58.923
- type: ndcg_at_20
value: 54.069
- type: ndcg_at_3
value: 46.598
- type: ndcg_at_5
value: 49.09
- type: precision_at_1
value: 40.2
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.567
- type: precision_at_1000
value: 0.201
- type: precision_at_20
value: 5.88
- type: precision_at_3
value: 22.413
- type: precision_at_5
value: 16.166
- type: recall_at_1
value: 33.335
- type: recall_at_10
value: 64.551
- type: recall_at_100
value: 85.821
- type: recall_at_1000
value: 96.762
- type: recall_at_20
value: 72.174
- type: recall_at_3
value: 49.486000000000004
- type: recall_at_5
value: 56.333
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: main_score
value: 47.743
- type: map_at_1
value: 30.749
- type: map_at_10
value: 41.893
- type: map_at_100
value: 43.074
- type: map_at_1000
value: 43.206
- type: map_at_20
value: 42.484
- type: map_at_3
value: 38.832
- type: map_at_5
value: 40.56
- type: mrr_at_1
value: 38.47133757961784
- type: mrr_at_10
value: 47.47879385299764
- type: mrr_at_100
value: 48.13041682690096
- type: mrr_at_1000
value: 48.16908094151714
- type: mrr_at_20
value: 47.83975520310091
- type: mrr_at_3
value: 45.24416135881104
- type: mrr_at_5
value: 46.4575371549894
- type: nauc_map_at_1000_diff1
value: 53.06462034979563
- type: nauc_map_at_1000_max
value: 40.432105687788656
- type: nauc_map_at_1000_std
value: 0.8039549983504692
- type: nauc_map_at_100_diff1
value: 53.05370086178664
- type: nauc_map_at_100_max
value: 40.35039423002031
- type: nauc_map_at_100_std
value: 0.6926327616039866
- type: nauc_map_at_10_diff1
value: 53.1830045138059
- type: nauc_map_at_10_max
value: 39.627286670538595
- type: nauc_map_at_10_std
value: -0.22464993353878815
- type: nauc_map_at_1_diff1
value: 56.871781522537766
- type: nauc_map_at_1_max
value: 32.96704680744524
- type: nauc_map_at_1_std
value: -5.921602493857661
- type: nauc_map_at_20_diff1
value: 53.145249746486044
- type: nauc_map_at_20_max
value: 40.01420443810482
- type: nauc_map_at_20_std
value: 0.08024012298451409
- type: nauc_map_at_3_diff1
value: 53.61256390241628
- type: nauc_map_at_3_max
value: 37.718761042447355
- type: nauc_map_at_3_std
value: -3.1494217572705643
- type: nauc_map_at_5_diff1
value: 53.42451370773802
- type: nauc_map_at_5_max
value: 39.10211508999835
- type: nauc_map_at_5_std
value: -1.3726005124064382
- type: nauc_mrr_at_1000_diff1
value: 52.366327228586826
- type: nauc_mrr_at_1000_max
value: 42.79408822085321
- type: nauc_mrr_at_1000_std
value: 5.269519433666342
- type: nauc_mrr_at_100_diff1
value: 52.35603052240957
- type: nauc_mrr_at_100_max
value: 42.79000481880218
- type: nauc_mrr_at_100_std
value: 5.2750737033839
- type: nauc_mrr_at_10_diff1
value: 52.39562273635053
- type: nauc_mrr_at_10_max
value: 42.89003586620541
- type: nauc_mrr_at_10_std
value: 5.271670669960424
- type: nauc_mrr_at_1_diff1
value: 55.23898880710424
- type: nauc_mrr_at_1_max
value: 40.54533981737213
- type: nauc_mrr_at_1_std
value: 2.8970042155061764
- type: nauc_mrr_at_20_diff1
value: 52.37981625369539
- type: nauc_mrr_at_20_max
value: 42.84997042876778
- type: nauc_mrr_at_20_std
value: 5.227463826093572
- type: nauc_mrr_at_3_diff1
value: 52.72571788614424
- type: nauc_mrr_at_3_max
value: 42.345870917325726
- type: nauc_mrr_at_3_std
value: 3.299097645280945
- type: nauc_mrr_at_5_diff1
value: 52.62188834616699
- type: nauc_mrr_at_5_max
value: 42.903468515894396
- type: nauc_mrr_at_5_std
value: 4.747245788723795
- type: nauc_ndcg_at_1000_diff1
value: 51.35755860941204
- type: nauc_ndcg_at_1000_max
value: 42.52609999052394
- type: nauc_ndcg_at_1000_std
value: 5.642311193436153
- type: nauc_ndcg_at_100_diff1
value: 51.28342511372341
- type: nauc_ndcg_at_100_max
value: 42.37095542860874
- type: nauc_ndcg_at_100_std
value: 5.438433970975347
- type: nauc_ndcg_at_10_diff1
value: 51.71963256563276
- type: nauc_ndcg_at_10_max
value: 42.02346709779174
- type: nauc_ndcg_at_10_std
value: 3.824062263424335
- type: nauc_ndcg_at_1_diff1
value: 55.23898880710424
- type: nauc_ndcg_at_1_max
value: 40.54533981737213
- type: nauc_ndcg_at_1_std
value: 2.8970042155061764
- type: nauc_ndcg_at_20_diff1
value: 51.62634477715352
- type: nauc_ndcg_at_20_max
value: 42.29963927857424
- type: nauc_ndcg_at_20_std
value: 3.9028710206367236
- type: nauc_ndcg_at_3_diff1
value: 52.222449202755016
- type: nauc_ndcg_at_3_max
value: 41.46992245846295
- type: nauc_ndcg_at_3_std
value: 1.0823436332685996
- type: nauc_ndcg_at_5_diff1
value: 52.16212705304167
- type: nauc_ndcg_at_5_max
value: 42.13209332939894
- type: nauc_ndcg_at_5_std
value: 2.4542588912655274
- type: nauc_precision_at_1000_diff1
value: -8.401668509217943
- type: nauc_precision_at_1000_max
value: 15.032825183812085
- type: nauc_precision_at_1000_std
value: 26.43305637512703
- type: nauc_precision_at_100_diff1
value: -1.8634808652246229
- type: nauc_precision_at_100_max
value: 25.81140765391014
- type: nauc_precision_at_100_std
value: 30.416905158069866
- type: nauc_precision_at_10_diff1
value: 17.41557757307102
- type: nauc_precision_at_10_max
value: 39.14885850946607
- type: nauc_precision_at_10_std
value: 24.95280377881581
- type: nauc_precision_at_1_diff1
value: 55.23898880710424
- type: nauc_precision_at_1_max
value: 40.54533981737213
- type: nauc_precision_at_1_std
value: 2.8970042155061764
- type: nauc_precision_at_20_diff1
value: 10.062640125327128
- type: nauc_precision_at_20_max
value: 35.045402951191846
- type: nauc_precision_at_20_std
value: 25.70168197296463
- type: nauc_precision_at_3_diff1
value: 33.46362110931572
- type: nauc_precision_at_3_max
value: 41.412992322808925
- type: nauc_precision_at_3_std
value: 11.979383703068118
- type: nauc_precision_at_5_diff1
value: 26.683507518187668
- type: nauc_precision_at_5_max
value: 41.72280139069927
- type: nauc_precision_at_5_std
value: 19.17798438251631
- type: nauc_recall_at_1000_diff1
value: 38.735635750923215
- type: nauc_recall_at_1000_max
value: 44.86473643316888
- type: nauc_recall_at_1000_std
value: 31.25373100446453
- type: nauc_recall_at_100_diff1
value: 40.57017590339941
- type: nauc_recall_at_100_max
value: 41.58935193499359
- type: nauc_recall_at_100_std
value: 19.64130480064006
- type: nauc_recall_at_10_diff1
value: 45.17360514460368
- type: nauc_recall_at_10_max
value: 40.261115967269255
- type: nauc_recall_at_10_std
value: 7.455967519438798
- type: nauc_recall_at_1_diff1
value: 56.871781522537766
- type: nauc_recall_at_1_max
value: 32.96704680744524
- type: nauc_recall_at_1_std
value: -5.921602493857661
- type: nauc_recall_at_20_diff1
value: 43.72345233115324
- type: nauc_recall_at_20_max
value: 41.57606589762751
- type: nauc_recall_at_20_std
value: 8.691613720578838
- type: nauc_recall_at_3_diff1
value: 49.05085474723903
- type: nauc_recall_at_3_max
value: 37.76677336796684
- type: nauc_recall_at_3_std
value: -2.60155821559317
- type: nauc_recall_at_5_diff1
value: 47.93530083560441
- type: nauc_recall_at_5_max
value: 40.34510386143269
- type: nauc_recall_at_5_std
value: 2.490510815950763
- type: ndcg_at_1
value: 38.471
- type: ndcg_at_10
value: 47.743
- type: ndcg_at_100
value: 52.105999999999995
- type: ndcg_at_1000
value: 54.047
- type: ndcg_at_20
value: 49.277
- type: ndcg_at_3
value: 43.423
- type: ndcg_at_5
value: 45.308
- type: precision_at_1
value: 38.471
- type: precision_at_10
value: 8.936
- type: precision_at_100
value: 1.439
- type: precision_at_1000
value: 0.191
- type: precision_at_20
value: 5.197
- type: precision_at_3
value: 21.21
- type: precision_at_5
value: 14.764
- type: recall_at_1
value: 30.749
- type: recall_at_10
value: 58.769000000000005
- type: recall_at_100
value: 77.12599999999999
- type: recall_at_1000
value: 89.131
- type: recall_at_20
value: 64.23299999999999
- type: recall_at_3
value: 45.722
- type: recall_at_5
value: 51.434999999999995
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: main_score
value: 61.763999999999996
- type: map_at_1
value: 41.738
- type: map_at_10
value: 55.54900000000001
- type: map_at_100
value: 56.595
- type: map_at_1000
value: 56.641
- type: map_at_20
value: 56.211
- type: map_at_3
value: 52.11900000000001
- type: map_at_5
value: 54.11
- type: mrr_at_1
value: 47.460815047021946
- type: mrr_at_10
value: 58.77068716723895
- type: mrr_at_100
value: 59.38209751192344
- type: mrr_at_1000
value: 59.40317589090272
- type: mrr_at_20
value: 59.18129234953538
- type: mrr_at_3
value: 56.269592476489095
- type: mrr_at_5
value: 57.708463949843356
- type: nauc_map_at_1000_diff1
value: 51.887217799463734
- type: nauc_map_at_1000_max
value: 38.476238579220265
- type: nauc_map_at_1000_std
value: -8.909798628947804
- type: nauc_map_at_100_diff1
value: 51.89673571830934
- type: nauc_map_at_100_max
value: 38.49528851775263
- type: nauc_map_at_100_std
value: -8.889935720271557
- type: nauc_map_at_10_diff1
value: 51.91349178068071
- type: nauc_map_at_10_max
value: 38.245010697659836
- type: nauc_map_at_10_std
value: -9.52932907514524
- type: nauc_map_at_1_diff1
value: 55.367152126889216
- type: nauc_map_at_1_max
value: 31.488529193663776
- type: nauc_map_at_1_std
value: -11.70055580794173
- type: nauc_map_at_20_diff1
value: 51.85824325926638
- type: nauc_map_at_20_max
value: 38.46667850988723
- type: nauc_map_at_20_std
value: -9.073982957469298
- type: nauc_map_at_3_diff1
value: 52.453646927521646
- type: nauc_map_at_3_max
value: 37.17158366121139
- type: nauc_map_at_3_std
value: -11.075317328080358
- type: nauc_map_at_5_diff1
value: 52.18170093862806
- type: nauc_map_at_5_max
value: 37.87875768077388
- type: nauc_map_at_5_std
value: -10.419858401874496
- type: nauc_mrr_at_1000_diff1
value: 50.893763535986395
- type: nauc_mrr_at_1000_max
value: 38.27283318452696
- type: nauc_mrr_at_1000_std
value: -8.965768039001496
- type: nauc_mrr_at_100_diff1
value: 50.89248813810169
- type: nauc_mrr_at_100_max
value: 38.28950132255245
- type: nauc_mrr_at_100_std
value: -8.95128100093488
- type: nauc_mrr_at_10_diff1
value: 50.77022223657664
- type: nauc_mrr_at_10_max
value: 38.375655546871265
- type: nauc_mrr_at_10_std
value: -9.095822436312883
- type: nauc_mrr_at_1_diff1
value: 54.273269231030376
- type: nauc_mrr_at_1_max
value: 35.215199363709694
- type: nauc_mrr_at_1_std
value: -11.475700374314476
- type: nauc_mrr_at_20_diff1
value: 50.81456113949372
- type: nauc_mrr_at_20_max
value: 38.302175737552055
- type: nauc_mrr_at_20_std
value: -8.934574273523289
- type: nauc_mrr_at_3_diff1
value: 50.78862027858185
- type: nauc_mrr_at_3_max
value: 37.897265642308774
- type: nauc_mrr_at_3_std
value: -9.7051681225179
- type: nauc_mrr_at_5_diff1
value: 50.90492316147762
- type: nauc_mrr_at_5_max
value: 38.53722687374221
- type: nauc_mrr_at_5_std
value: -9.299890938504227
- type: nauc_ndcg_at_1000_diff1
value: 50.73638139548288
- type: nauc_ndcg_at_1000_max
value: 39.85802557514683
- type: nauc_ndcg_at_1000_std
value: -6.70113183960232
- type: nauc_ndcg_at_100_diff1
value: 50.779535406638765
- type: nauc_ndcg_at_100_max
value: 40.394251354245036
- type: nauc_ndcg_at_100_std
value: -6.17206367606794
- type: nauc_ndcg_at_10_diff1
value: 50.303282528711016
- type: nauc_ndcg_at_10_max
value: 40.231987371813275
- type: nauc_ndcg_at_10_std
value: -7.639018988100839
- type: nauc_ndcg_at_1_diff1
value: 54.273269231030376
- type: nauc_ndcg_at_1_max
value: 35.215199363709694
- type: nauc_ndcg_at_1_std
value: -11.475700374314476
- type: nauc_ndcg_at_20_diff1
value: 50.356050127103714
- type: nauc_ndcg_at_20_max
value: 40.55568084242222
- type: nauc_ndcg_at_20_std
value: -6.483107726038491
- type: nauc_ndcg_at_3_diff1
value: 51.05296014104886
- type: nauc_ndcg_at_3_max
value: 38.43234794308373
- type: nauc_ndcg_at_3_std
value: -10.439005270644946
- type: nauc_ndcg_at_5_diff1
value: 50.910744514124396
- type: nauc_ndcg_at_5_max
value: 39.65997793063013
- type: nauc_ndcg_at_5_std
value: -9.301232437151493
- type: nauc_precision_at_1000_diff1
value: -20.181933493165733
- type: nauc_precision_at_1000_max
value: 2.578307678316095
- type: nauc_precision_at_1000_std
value: 15.686799365012833
- type: nauc_precision_at_100_diff1
value: -13.795727875316347
- type: nauc_precision_at_100_max
value: 9.709062354686774
- type: nauc_precision_at_100_std
value: 18.961613263814677
- type: nauc_precision_at_10_diff1
value: 7.40872143060594
- type: nauc_precision_at_10_max
value: 26.809993041042556
- type: nauc_precision_at_10_std
value: 10.236067383032058
- type: nauc_precision_at_1_diff1
value: 54.273269231030376
- type: nauc_precision_at_1_max
value: 35.215199363709694
- type: nauc_precision_at_1_std
value: -11.475700374314476
- type: nauc_precision_at_20_diff1
value: -1.688941886501611
- type: nauc_precision_at_20_max
value: 21.268201038992522
- type: nauc_precision_at_20_std
value: 16.07376773498563
- type: nauc_precision_at_3_diff1
value: 28.74741840390366
- type: nauc_precision_at_3_max
value: 35.76072260864896
- type: nauc_precision_at_3_std
value: -3.417692124530744
- type: nauc_precision_at_5_diff1
value: 19.548619556271156
- type: nauc_precision_at_5_max
value: 31.886919665943346
- type: nauc_precision_at_5_std
value: 1.862934756145585
- type: nauc_recall_at_1000_diff1
value: 31.041694793670338
- type: nauc_recall_at_1000_max
value: 63.91892534071412
- type: nauc_recall_at_1000_std
value: 69.14154944882482
- type: nauc_recall_at_100_diff1
value: 43.49542559947028
- type: nauc_recall_at_100_max
value: 56.03185734090638
- type: nauc_recall_at_100_std
value: 22.095792306102354
- type: nauc_recall_at_10_diff1
value: 43.14512549298462
- type: nauc_recall_at_10_max
value: 45.22069238009228
- type: nauc_recall_at_10_std
value: -1.2112961961367767
- type: nauc_recall_at_1_diff1
value: 55.367152126889216
- type: nauc_recall_at_1_max
value: 31.488529193663776
- type: nauc_recall_at_1_std
value: -11.70055580794173
- type: nauc_recall_at_20_diff1
value: 41.80793189392197
- type: nauc_recall_at_20_max
value: 48.68496142311243
- type: nauc_recall_at_20_std
value: 7.150814199044829
- type: nauc_recall_at_3_diff1
value: 47.569484872499665
- type: nauc_recall_at_3_max
value: 39.60379791030235
- type: nauc_recall_at_3_std
value: -9.958304202022761
- type: nauc_recall_at_5_diff1
value: 46.3357445159555
- type: nauc_recall_at_5_max
value: 42.69508638941086
- type: nauc_recall_at_5_std
value: -6.991079788988482
- type: ndcg_at_1
value: 47.461
- type: ndcg_at_10
value: 61.763999999999996
- type: ndcg_at_100
value: 65.613
- type: ndcg_at_1000
value: 66.435
- type: ndcg_at_20
value: 63.577
- type: ndcg_at_3
value: 56.119
- type: ndcg_at_5
value: 58.897
- type: precision_at_1
value: 47.461
- type: precision_at_10
value: 9.925
- type: precision_at_100
value: 1.283
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_20
value: 5.542
- type: precision_at_3
value: 25.119999999999997
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.738
- type: recall_at_10
value: 76.78399999999999
- type: recall_at_100
value: 92.917
- type: recall_at_1000
value: 98.63499999999999
- type: recall_at_20
value: 83.313
- type: recall_at_3
value: 61.803
- type: recall_at_5
value: 68.49199999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: main_score
value: 37.997
- type: map_at_1
value: 24.316
- type: map_at_10
value: 32.673
- type: map_at_100
value: 33.757
- type: map_at_1000
value: 33.839999999999996
- type: map_at_20
value: 33.289
- type: map_at_3
value: 29.705
- type: map_at_5
value: 31.258999999999997
- type: mrr_at_1
value: 26.55367231638418
- type: mrr_at_10
value: 34.95045287418165
- type: mrr_at_100
value: 35.88860620376054
- type: mrr_at_1000
value: 35.94690680526854
- type: mrr_at_20
value: 35.51689167162481
- type: mrr_at_3
value: 32.090395480226
- type: mrr_at_5
value: 33.59887005649716
- type: nauc_map_at_1000_diff1
value: 40.05626085462073
- type: nauc_map_at_1000_max
value: 27.805616301644108
- type: nauc_map_at_1000_std
value: 2.70246695251992
- type: nauc_map_at_100_diff1
value: 40.059278877458546
- type: nauc_map_at_100_max
value: 27.77648271649888
- type: nauc_map_at_100_std
value: 2.722441305955515
- type: nauc_map_at_10_diff1
value: 40.31968856988776
- type: nauc_map_at_10_max
value: 27.476489831549973
- type: nauc_map_at_10_std
value: 2.317366284056495
- type: nauc_map_at_1_diff1
value: 44.48148871072693
- type: nauc_map_at_1_max
value: 28.919146703924675
- type: nauc_map_at_1_std
value: -0.1434376879249071
- type: nauc_map_at_20_diff1
value: 40.06730497906938
- type: nauc_map_at_20_max
value: 27.668823515524004
- type: nauc_map_at_20_std
value: 2.493103019008483
- type: nauc_map_at_3_diff1
value: 41.12772700221662
- type: nauc_map_at_3_max
value: 27.174803787199824
- type: nauc_map_at_3_std
value: -0.10118635015762467
- type: nauc_map_at_5_diff1
value: 40.77458823783091
- type: nauc_map_at_5_max
value: 27.080426477470642
- type: nauc_map_at_5_std
value: 1.485466402750173
- type: nauc_mrr_at_1000_diff1
value: 38.312224992745385
- type: nauc_mrr_at_1000_max
value: 28.950414700386702
- type: nauc_mrr_at_1000_std
value: 4.633122302505108
- type: nauc_mrr_at_100_diff1
value: 38.293568602643354
- type: nauc_mrr_at_100_max
value: 28.935077067979293
- type: nauc_mrr_at_100_std
value: 4.6507547334542005
- type: nauc_mrr_at_10_diff1
value: 38.43539906942557
- type: nauc_mrr_at_10_max
value: 28.740524868553607
- type: nauc_mrr_at_10_std
value: 4.465395711794246
- type: nauc_mrr_at_1_diff1
value: 42.806114694868
- type: nauc_mrr_at_1_max
value: 30.818773809580115
- type: nauc_mrr_at_1_std
value: 3.132175800569368
- type: nauc_mrr_at_20_diff1
value: 38.28878516887039
- type: nauc_mrr_at_20_max
value: 28.88291682526864
- type: nauc_mrr_at_20_std
value: 4.5635678164546
- type: nauc_mrr_at_3_diff1
value: 38.92127952259694
- type: nauc_mrr_at_3_max
value: 28.807748404698803
- type: nauc_mrr_at_3_std
value: 2.849609058088602
- type: nauc_mrr_at_5_diff1
value: 38.75107428963604
- type: nauc_mrr_at_5_max
value: 28.497437908040883
- type: nauc_mrr_at_5_std
value: 4.014347384415091
- type: nauc_ndcg_at_1000_diff1
value: 37.76456270291222
- type: nauc_ndcg_at_1000_max
value: 28.89838003177218
- type: nauc_ndcg_at_1000_std
value: 5.749873835705088
- type: nauc_ndcg_at_100_diff1
value: 37.364173569182555
- type: nauc_ndcg_at_100_max
value: 28.188496756099386
- type: nauc_ndcg_at_100_std
value: 6.336162952356489
- type: nauc_ndcg_at_10_diff1
value: 37.99346022671752
- type: nauc_ndcg_at_10_max
value: 27.216283907868817
- type: nauc_ndcg_at_10_std
value: 4.675349793835876
- type: nauc_ndcg_at_1_diff1
value: 42.806114694868
- type: nauc_ndcg_at_1_max
value: 30.818773809580115
- type: nauc_ndcg_at_1_std
value: 3.132175800569368
- type: nauc_ndcg_at_20_diff1
value: 37.15938715631981
- type: nauc_ndcg_at_20_max
value: 27.79557864495994
- type: nauc_ndcg_at_20_std
value: 5.100109928397954
- type: nauc_ndcg_at_3_diff1
value: 39.48583283953628
- type: nauc_ndcg_at_3_max
value: 27.134700120340693
- type: nauc_ndcg_at_3_std
value: 0.5675585179642199
- type: nauc_ndcg_at_5_diff1
value: 38.95882101952427
- type: nauc_ndcg_at_5_max
value: 26.610181412750727
- type: nauc_ndcg_at_5_std
value: 3.148006615861485
- type: nauc_precision_at_1000_diff1
value: -7.764948775245091
- type: nauc_precision_at_1000_max
value: 20.155338612433443
- type: nauc_precision_at_1000_std
value: 17.83459760938805
- type: nauc_precision_at_100_diff1
value: 6.237678147150076
- type: nauc_precision_at_100_max
value: 23.771296767151856
- type: nauc_precision_at_100_std
value: 22.753492059234574
- type: nauc_precision_at_10_diff1
value: 24.993500697049335
- type: nauc_precision_at_10_max
value: 27.990139005076152
- type: nauc_precision_at_10_std
value: 15.431533372397558
- type: nauc_precision_at_1_diff1
value: 42.806114694868
- type: nauc_precision_at_1_max
value: 30.818773809580115
- type: nauc_precision_at_1_std
value: 3.132175800569368
- type: nauc_precision_at_20_diff1
value: 17.590012469188235
- type: nauc_precision_at_20_max
value: 29.169967468169116
- type: nauc_precision_at_20_std
value: 17.493501613866094
- type: nauc_precision_at_3_diff1
value: 34.08623278149959
- type: nauc_precision_at_3_max
value: 27.285348347045286
- type: nauc_precision_at_3_std
value: 3.5484785893106574
- type: nauc_precision_at_5_diff1
value: 31.448816122094613
- type: nauc_precision_at_5_max
value: 26.885293174661605
- type: nauc_precision_at_5_std
value: 11.257484431730946
- type: nauc_recall_at_1000_diff1
value: 28.46487014213398
- type: nauc_recall_at_1000_max
value: 44.900835555926356
- type: nauc_recall_at_1000_std
value: 31.16409093849983
- type: nauc_recall_at_100_diff1
value: 26.72900863714146
- type: nauc_recall_at_100_max
value: 26.941137208153993
- type: nauc_recall_at_100_std
value: 22.621547900809624
- type: nauc_recall_at_10_diff1
value: 31.133823078109412
- type: nauc_recall_at_10_max
value: 23.89984601851163
- type: nauc_recall_at_10_std
value: 9.445198373476424
- type: nauc_recall_at_1_diff1
value: 44.48148871072693
- type: nauc_recall_at_1_max
value: 28.919146703924675
- type: nauc_recall_at_1_std
value: -0.1434376879249071
- type: nauc_recall_at_20_diff1
value: 27.26129142150393
- type: nauc_recall_at_20_max
value: 25.6868355894244
- type: nauc_recall_at_20_std
value: 11.26722787869625
- type: nauc_recall_at_3_diff1
value: 36.67176156769862
- type: nauc_recall_at_3_max
value: 24.517784284441092
- type: nauc_recall_at_3_std
value: -0.06621021628144753
- type: nauc_recall_at_5_diff1
value: 34.52566897138122
- type: nauc_recall_at_5_max
value: 22.720135055519073
- type: nauc_recall_at_5_std
value: 5.15363865803676
- type: ndcg_at_1
value: 26.554
- type: ndcg_at_10
value: 37.997
- type: ndcg_at_100
value: 43.305
- type: ndcg_at_1000
value: 45.282
- type: ndcg_at_20
value: 40.129
- type: ndcg_at_3
value: 32.057
- type: ndcg_at_5
value: 34.758
- type: precision_at_1
value: 26.554
- type: precision_at_10
value: 6.023
- type: precision_at_100
value: 0.918
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 3.514
- type: precision_at_3
value: 13.559
- type: precision_at_5
value: 9.672
- type: recall_at_1
value: 24.316
- type: recall_at_10
value: 52.413
- type: recall_at_100
value: 76.80399999999999
- type: recall_at_1000
value: 91.623
- type: recall_at_20
value: 60.462
- type: recall_at_3
value: 36.351
- type: recall_at_5
value: 42.858000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: main_score
value: 28.412
- type: map_at_1
value: 14.859
- type: map_at_10
value: 22.944
- type: map_at_100
value: 24.301000000000002
- type: map_at_1000
value: 24.422
- type: map_at_20
value: 23.699
- type: map_at_3
value: 19.88
- type: map_at_5
value: 21.617
- type: mrr_at_1
value: 18.781094527363184
- type: mrr_at_10
value: 27.092316196793792
- type: mrr_at_100
value: 28.237861305761868
- type: mrr_at_1000
value: 28.309422454313843
- type: mrr_at_20
value: 27.796436582724766
- type: mrr_at_3
value: 24.191542288557226
- type: mrr_at_5
value: 25.97014925373134
- type: nauc_map_at_1000_diff1
value: 31.282762294834683
- type: nauc_map_at_1000_max
value: 22.10753198129143
- type: nauc_map_at_1000_std
value: 7.464766818611464
- type: nauc_map_at_100_diff1
value: 31.20876547623262
- type: nauc_map_at_100_max
value: 22.04855783261337
- type: nauc_map_at_100_std
value: 7.46756154956561
- type: nauc_map_at_10_diff1
value: 32.063025777530946
- type: nauc_map_at_10_max
value: 22.192839864708276
- type: nauc_map_at_10_std
value: 6.935246733242942
- type: nauc_map_at_1_diff1
value: 37.124675662048645
- type: nauc_map_at_1_max
value: 21.705513335758486
- type: nauc_map_at_1_std
value: 5.125960085146019
- type: nauc_map_at_20_diff1
value: 31.460350111051543
- type: nauc_map_at_20_max
value: 22.01600381936477
- type: nauc_map_at_20_std
value: 7.320346261837271
- type: nauc_map_at_3_diff1
value: 33.549284246016946
- type: nauc_map_at_3_max
value: 21.3496504436454
- type: nauc_map_at_3_std
value: 5.629135047549884
- type: nauc_map_at_5_diff1
value: 33.40126100368468
- type: nauc_map_at_5_max
value: 22.07074975303988
- type: nauc_map_at_5_std
value: 6.0009506331816915
- type: nauc_mrr_at_1000_diff1
value: 31.676659452959417
- type: nauc_mrr_at_1000_max
value: 22.893987786799595
- type: nauc_mrr_at_1000_std
value: 6.023049236283401
- type: nauc_mrr_at_100_diff1
value: 31.61103328375909
- type: nauc_mrr_at_100_max
value: 22.868698340353365
- type: nauc_mrr_at_100_std
value: 6.017352015320805
- type: nauc_mrr_at_10_diff1
value: 31.953429710735765
- type: nauc_mrr_at_10_max
value: 22.88953587703519
- type: nauc_mrr_at_10_std
value: 5.736962509390694
- type: nauc_mrr_at_1_diff1
value: 35.97635527682404
- type: nauc_mrr_at_1_max
value: 22.800448037132163
- type: nauc_mrr_at_1_std
value: 3.2117385280672455
- type: nauc_mrr_at_20_diff1
value: 31.595235519229487
- type: nauc_mrr_at_20_max
value: 22.799886818509123
- type: nauc_mrr_at_20_std
value: 6.072525408593461
- type: nauc_mrr_at_3_diff1
value: 33.18342375116275
- type: nauc_mrr_at_3_max
value: 22.52374592963976
- type: nauc_mrr_at_3_std
value: 4.767522697706218
- type: nauc_mrr_at_5_diff1
value: 33.119779061591515
- type: nauc_mrr_at_5_max
value: 23.003248125501745
- type: nauc_mrr_at_5_std
value: 4.976805747506817
- type: nauc_ndcg_at_1000_diff1
value: 28.292015382102793
- type: nauc_ndcg_at_1000_max
value: 22.68404765768237
- type: nauc_ndcg_at_1000_std
value: 9.589972055962098
- type: nauc_ndcg_at_100_diff1
value: 26.96479405167567
- type: nauc_ndcg_at_100_max
value: 21.991567834408762
- type: nauc_ndcg_at_100_std
value: 10.039949830937676
- type: nauc_ndcg_at_10_diff1
value: 29.467288216868713
- type: nauc_ndcg_at_10_max
value: 22.44104565858907
- type: nauc_ndcg_at_10_std
value: 8.461186039677754
- type: nauc_ndcg_at_1_diff1
value: 35.97635527682404
- type: nauc_ndcg_at_1_max
value: 22.800448037132163
- type: nauc_ndcg_at_1_std
value: 3.2117385280672455
- type: nauc_ndcg_at_20_diff1
value: 27.651039113853848
- type: nauc_ndcg_at_20_max
value: 21.865976465118173
- type: nauc_ndcg_at_20_std
value: 9.612409644962762
- type: nauc_ndcg_at_3_diff1
value: 32.261234884088516
- type: nauc_ndcg_at_3_max
value: 21.569892122182054
- type: nauc_ndcg_at_3_std
value: 5.934094272513952
- type: nauc_ndcg_at_5_diff1
value: 32.177187585868275
- type: nauc_ndcg_at_5_max
value: 22.501692436415365
- type: nauc_ndcg_at_5_std
value: 6.628292970421619
- type: nauc_precision_at_1000_diff1
value: -3.119953273272669
- type: nauc_precision_at_1000_max
value: 1.1513386014161908
- type: nauc_precision_at_1000_std
value: -2.164470131685831
- type: nauc_precision_at_100_diff1
value: 0.5849985774022525
- type: nauc_precision_at_100_max
value: 10.237261683711365
- type: nauc_precision_at_100_std
value: 9.57755547972335
- type: nauc_precision_at_10_diff1
value: 15.246412164216192
- type: nauc_precision_at_10_max
value: 19.899416826328565
- type: nauc_precision_at_10_std
value: 10.003123363456073
- type: nauc_precision_at_1_diff1
value: 35.97635527682404
- type: nauc_precision_at_1_max
value: 22.800448037132163
- type: nauc_precision_at_1_std
value: 3.2117385280672455
- type: nauc_precision_at_20_diff1
value: 7.606434579256874
- type: nauc_precision_at_20_max
value: 15.445346072441597
- type: nauc_precision_at_20_std
value: 11.538639325143942
- type: nauc_precision_at_3_diff1
value: 25.28573060963354
- type: nauc_precision_at_3_max
value: 20.11025294163431
- type: nauc_precision_at_3_std
value: 5.4367185562279525
- type: nauc_precision_at_5_diff1
value: 23.428693353532925
- type: nauc_precision_at_5_max
value: 21.87288793778549
- type: nauc_precision_at_5_std
value: 6.350278856507092
- type: nauc_recall_at_1000_diff1
value: 11.030800804748713
- type: nauc_recall_at_1000_max
value: 28.207037540270484
- type: nauc_recall_at_1000_std
value: 26.53322787470092
- type: nauc_recall_at_100_diff1
value: 9.45619750103627
- type: nauc_recall_at_100_max
value: 18.641295313722722
- type: nauc_recall_at_100_std
value: 19.89094444759181
- type: nauc_recall_at_10_diff1
value: 21.59965548683592
- type: nauc_recall_at_10_max
value: 20.983235462917357
- type: nauc_recall_at_10_std
value: 12.421019075877183
- type: nauc_recall_at_1_diff1
value: 37.124675662048645
- type: nauc_recall_at_1_max
value: 21.705513335758486
- type: nauc_recall_at_1_std
value: 5.125960085146019
- type: nauc_recall_at_20_diff1
value: 15.356277525370507
- type: nauc_recall_at_20_max
value: 18.853996115586888
- type: nauc_recall_at_20_std
value: 16.118805288983083
- type: nauc_recall_at_3_diff1
value: 28.945843357597685
- type: nauc_recall_at_3_max
value: 19.8912702523286
- type: nauc_recall_at_3_std
value: 7.5851361764687795
- type: nauc_recall_at_5_diff1
value: 28.36471699123168
- type: nauc_recall_at_5_max
value: 21.17015525566982
- type: nauc_recall_at_5_std
value: 8.24163064970665
- type: ndcg_at_1
value: 18.781
- type: ndcg_at_10
value: 28.412
- type: ndcg_at_100
value: 34.782999999999994
- type: ndcg_at_1000
value: 37.518
- type: ndcg_at_20
value: 30.962
- type: ndcg_at_3
value: 22.782
- type: ndcg_at_5
value: 25.568
- type: precision_at_1
value: 18.781
- type: precision_at_10
value: 5.498
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 3.4389999999999996
- type: precision_at_3
value: 11.193999999999999
- type: precision_at_5
value: 8.607
- type: recall_at_1
value: 14.859
- type: recall_at_10
value: 41.229
- type: recall_at_100
value: 68.853
- type: recall_at_1000
value: 87.86
- type: recall_at_20
value: 50.333000000000006
- type: recall_at_3
value: 25.889
- type: recall_at_5
value: 32.798
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: main_score
value: 46.627
- type: map_at_1
value: 29.110999999999997
- type: map_at_10
value: 40.157
- type: map_at_100
value: 41.5
- type: map_at_1000
value: 41.615
- type: map_at_20
value: 40.907
- type: map_at_3
value: 36.741
- type: map_at_5
value: 38.532
- type: mrr_at_1
value: 35.899903753609244
- type: mrr_at_10
value: 45.89913989336511
- type: mrr_at_100
value: 46.66580649720437
- type: mrr_at_1000
value: 46.71327306556841
- type: mrr_at_20
value: 46.35048991522675
- type: mrr_at_3
value: 43.10234199550849
- type: mrr_at_5
value: 44.623034969521925
- type: nauc_map_at_1000_diff1
value: 48.819197227113
- type: nauc_map_at_1000_max
value: 30.225193936185452
- type: nauc_map_at_1000_std
value: -0.11387170703748546
- type: nauc_map_at_100_diff1
value: 48.81653586975879
- type: nauc_map_at_100_max
value: 30.18941718252035
- type: nauc_map_at_100_std
value: -0.16136876140004403
- type: nauc_map_at_10_diff1
value: 49.02711545619007
- type: nauc_map_at_10_max
value: 29.54513048209343
- type: nauc_map_at_10_std
value: -1.0721344424269519
- type: nauc_map_at_1_diff1
value: 54.55373479956459
- type: nauc_map_at_1_max
value: 28.68525621187728
- type: nauc_map_at_1_std
value: -3.828505921327198
- type: nauc_map_at_20_diff1
value: 48.80329695009258
- type: nauc_map_at_20_max
value: 29.98470278569913
- type: nauc_map_at_20_std
value: -0.4907024448684188
- type: nauc_map_at_3_diff1
value: 49.49325346627698
- type: nauc_map_at_3_max
value: 30.001118629451362
- type: nauc_map_at_3_std
value: -1.8332855957955085
- type: nauc_map_at_5_diff1
value: 49.15308703989204
- type: nauc_map_at_5_max
value: 29.9743736651634
- type: nauc_map_at_5_std
value: -1.471848560457071
- type: nauc_mrr_at_1000_diff1
value: 49.6267405356935
- type: nauc_mrr_at_1000_max
value: 31.775511464032213
- type: nauc_mrr_at_1000_std
value: 2.1941676606625573
- type: nauc_mrr_at_100_diff1
value: 49.60865287375136
- type: nauc_mrr_at_100_max
value: 31.766711114897124
- type: nauc_mrr_at_100_std
value: 2.1958339273429597
- type: nauc_mrr_at_10_diff1
value: 49.731748265273836
- type: nauc_mrr_at_10_max
value: 31.510802716434373
- type: nauc_mrr_at_10_std
value: 1.850952038635735
- type: nauc_mrr_at_1_diff1
value: 54.326742857864915
- type: nauc_mrr_at_1_max
value: 31.714793704362155
- type: nauc_mrr_at_1_std
value: 1.4094420435868311
- type: nauc_mrr_at_20_diff1
value: 49.582036904653584
- type: nauc_mrr_at_20_max
value: 31.71211967406404
- type: nauc_mrr_at_20_std
value: 2.1307901281304202
- type: nauc_mrr_at_3_diff1
value: 49.99569893552195
- type: nauc_mrr_at_3_max
value: 32.010092946562025
- type: nauc_mrr_at_3_std
value: 1.4910063885459364
- type: nauc_mrr_at_5_diff1
value: 49.40329460354263
- type: nauc_mrr_at_5_max
value: 31.990047727579483
- type: nauc_mrr_at_5_std
value: 1.663734759562975
- type: nauc_ndcg_at_1000_diff1
value: 47.146065393209135
- type: nauc_ndcg_at_1000_max
value: 31.637365672232075
- type: nauc_ndcg_at_1000_std
value: 3.2425314915817105
- type: nauc_ndcg_at_100_diff1
value: 46.96953007559477
- type: nauc_ndcg_at_100_max
value: 31.16768307276679
- type: nauc_ndcg_at_100_std
value: 2.942488981572898
- type: nauc_ndcg_at_10_diff1
value: 47.63345306694598
- type: nauc_ndcg_at_10_max
value: 29.371578333227998
- type: nauc_ndcg_at_10_std
value: 0.06472978934137909
- type: nauc_ndcg_at_1_diff1
value: 54.326742857864915
- type: nauc_ndcg_at_1_max
value: 31.714793704362155
- type: nauc_ndcg_at_1_std
value: 1.4094420435868311
- type: nauc_ndcg_at_20_diff1
value: 46.81989380207635
- type: nauc_ndcg_at_20_max
value: 30.412570241892183
- type: nauc_ndcg_at_20_std
value: 1.5075658935703282
- type: nauc_ndcg_at_3_diff1
value: 48.410857274941726
- type: nauc_ndcg_at_3_max
value: 31.365778148874384
- type: nauc_ndcg_at_3_std
value: -0.3887448200634908
- type: nauc_ndcg_at_5_diff1
value: 47.65943245882207
- type: nauc_ndcg_at_5_max
value: 30.786802287608232
- type: nauc_ndcg_at_5_std
value: -0.3340427915788538
- type: nauc_precision_at_1000_diff1
value: -13.616360194561903
- type: nauc_precision_at_1000_max
value: 4.606458024282346
- type: nauc_precision_at_1000_std
value: 20.097753702338583
- type: nauc_precision_at_100_diff1
value: -3.8203411621014363
- type: nauc_precision_at_100_max
value: 12.195338438332039
- type: nauc_precision_at_100_std
value: 21.277772831047834
- type: nauc_precision_at_10_diff1
value: 17.41015815840667
- type: nauc_precision_at_10_max
value: 20.49327554673419
- type: nauc_precision_at_10_std
value: 14.317393694887748
- type: nauc_precision_at_1_diff1
value: 54.326742857864915
- type: nauc_precision_at_1_max
value: 31.714793704362155
- type: nauc_precision_at_1_std
value: 1.4094420435868311
- type: nauc_precision_at_20_diff1
value: 8.063727537918783
- type: nauc_precision_at_20_max
value: 19.39335288125252
- type: nauc_precision_at_20_std
value: 18.93106122331836
- type: nauc_precision_at_3_diff1
value: 32.705924980475146
- type: nauc_precision_at_3_max
value: 30.24641865632296
- type: nauc_precision_at_3_std
value: 7.195922370578724
- type: nauc_precision_at_5_diff1
value: 25.471170302890012
- type: nauc_precision_at_5_max
value: 27.2559781097725
- type: nauc_precision_at_5_std
value: 10.423480799933591
- type: nauc_recall_at_1000_diff1
value: 15.871912487469162
- type: nauc_recall_at_1000_max
value: 41.69115237346833
- type: nauc_recall_at_1000_std
value: 44.74346531949558
- type: nauc_recall_at_100_diff1
value: 32.150465708991376
- type: nauc_recall_at_100_max
value: 28.9450065694084
- type: nauc_recall_at_100_std
value: 16.12971379538094
- type: nauc_recall_at_10_diff1
value: 40.42003119650161
- type: nauc_recall_at_10_max
value: 23.798461011276167
- type: nauc_recall_at_10_std
value: -0.8906910654707625
- type: nauc_recall_at_1_diff1
value: 54.55373479956459
- type: nauc_recall_at_1_max
value: 28.68525621187728
- type: nauc_recall_at_1_std
value: -3.828505921327198
- type: nauc_recall_at_20_diff1
value: 36.08908544861558
- type: nauc_recall_at_20_max
value: 26.51340931742042
- type: nauc_recall_at_20_std
value: 4.67558978611164
- type: nauc_recall_at_3_diff1
value: 44.109094420929466
- type: nauc_recall_at_3_max
value: 29.817084024730185
- type: nauc_recall_at_3_std
value: -1.9280901477621615
- type: nauc_recall_at_5_diff1
value: 41.53929190979217
- type: nauc_recall_at_5_max
value: 28.682740378721512
- type: nauc_recall_at_5_std
value: -2.1436179905847705
- type: ndcg_at_1
value: 35.9
- type: ndcg_at_10
value: 46.627
- type: ndcg_at_100
value: 52.03
- type: ndcg_at_1000
value: 53.982
- type: ndcg_at_20
value: 48.748999999999995
- type: ndcg_at_3
value: 40.96
- type: ndcg_at_5
value: 43.389
- type: precision_at_1
value: 35.9
- type: precision_at_10
value: 8.652999999999999
- type: precision_at_100
value: 1.324
- type: precision_at_1000
value: 0.168
- type: precision_at_20
value: 5.053
- type: precision_at_3
value: 19.666
- type: precision_at_5
value: 13.879
- type: recall_at_1
value: 29.110999999999997
- type: recall_at_10
value: 60.21300000000001
- type: recall_at_100
value: 82.829
- type: recall_at_1000
value: 95.236
- type: recall_at_20
value: 67.506
- type: recall_at_3
value: 44.198
- type: recall_at_5
value: 50.62
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: main_score
value: 42.711
- type: map_at_1
value: 25.912000000000003
- type: map_at_10
value: 36.827
- type: map_at_100
value: 38.323
- type: map_at_1000
value: 38.426
- type: map_at_20
value: 37.674
- type: map_at_3
value: 33.815
- type: map_at_5
value: 35.253
- type: mrr_at_1
value: 31.621004566210047
- type: mrr_at_10
value: 41.63106110023917
- type: mrr_at_100
value: 42.68788468525227
- type: mrr_at_1000
value: 42.737936476792896
- type: mrr_at_20
value: 42.3034469946844
- type: mrr_at_3
value: 39.44063926940638
- type: mrr_at_5
value: 40.479452054794486
- type: nauc_map_at_1000_diff1
value: 41.21238459815453
- type: nauc_map_at_1000_max
value: 31.731913362155538
- type: nauc_map_at_1000_std
value: 4.095602573199812
- type: nauc_map_at_100_diff1
value: 41.2386060670619
- type: nauc_map_at_100_max
value: 31.745863964229752
- type: nauc_map_at_100_std
value: 4.152539294264819
- type: nauc_map_at_10_diff1
value: 41.06730435812859
- type: nauc_map_at_10_max
value: 31.154866667403546
- type: nauc_map_at_10_std
value: 3.352195309556991
- type: nauc_map_at_1_diff1
value: 46.719436307788634
- type: nauc_map_at_1_max
value: 27.23331305118017
- type: nauc_map_at_1_std
value: -3.294310698511136
- type: nauc_map_at_20_diff1
value: 41.02754767769435
- type: nauc_map_at_20_max
value: 31.360864488023783
- type: nauc_map_at_20_std
value: 3.738456116200237
- type: nauc_map_at_3_diff1
value: 41.6933203031956
- type: nauc_map_at_3_max
value: 29.89624455457615
- type: nauc_map_at_3_std
value: -0.01536463182866681
- type: nauc_map_at_5_diff1
value: 40.94567456109745
- type: nauc_map_at_5_max
value: 30.458349943583702
- type: nauc_map_at_5_std
value: 1.9655221641608267
- type: nauc_mrr_at_1000_diff1
value: 40.652351064681724
- type: nauc_mrr_at_1000_max
value: 33.01007429614183
- type: nauc_mrr_at_1000_std
value: 6.26143705110491
- type: nauc_mrr_at_100_diff1
value: 40.65741819780518
- type: nauc_mrr_at_100_max
value: 33.01722581370414
- type: nauc_mrr_at_100_std
value: 6.302551967295325
- type: nauc_mrr_at_10_diff1
value: 40.60567647703471
- type: nauc_mrr_at_10_max
value: 32.94692660407874
- type: nauc_mrr_at_10_std
value: 6.082085894261765
- type: nauc_mrr_at_1_diff1
value: 46.11518802989986
- type: nauc_mrr_at_1_max
value: 31.625471357672307
- type: nauc_mrr_at_1_std
value: 1.234566602020697
- type: nauc_mrr_at_20_diff1
value: 40.558484630555064
- type: nauc_mrr_at_20_max
value: 32.97107821653968
- type: nauc_mrr_at_20_std
value: 6.265323697745393
- type: nauc_mrr_at_3_diff1
value: 40.68096006055527
- type: nauc_mrr_at_3_max
value: 32.53822188043154
- type: nauc_mrr_at_3_std
value: 4.345818715177205
- type: nauc_mrr_at_5_diff1
value: 40.23796517179139
- type: nauc_mrr_at_5_max
value: 32.56979439355811
- type: nauc_mrr_at_5_std
value: 5.595951651809914
- type: nauc_ndcg_at_1000_diff1
value: 39.7027614173243
- type: nauc_ndcg_at_1000_max
value: 33.498346699070375
- type: nauc_ndcg_at_1000_std
value: 8.559325736291138
- type: nauc_ndcg_at_100_diff1
value: 39.97452504741169
- type: nauc_ndcg_at_100_max
value: 33.89577471481737
- type: nauc_ndcg_at_100_std
value: 10.167129337536283
- type: nauc_ndcg_at_10_diff1
value: 39.16788466313522
- type: nauc_ndcg_at_10_max
value: 32.47905308816861
- type: nauc_ndcg_at_10_std
value: 7.295048419911472
- type: nauc_ndcg_at_1_diff1
value: 46.11518802989986
- type: nauc_ndcg_at_1_max
value: 31.625471357672307
- type: nauc_ndcg_at_1_std
value: 1.234566602020697
- type: nauc_ndcg_at_20_diff1
value: 38.859039216458626
- type: nauc_ndcg_at_20_max
value: 32.741280842100274
- type: nauc_ndcg_at_20_std
value: 8.532519680049697
- type: nauc_ndcg_at_3_diff1
value: 39.50414846792753
- type: nauc_ndcg_at_3_max
value: 31.436293574105246
- type: nauc_ndcg_at_3_std
value: 2.7912054661515513
- type: nauc_ndcg_at_5_diff1
value: 38.70681148905142
- type: nauc_ndcg_at_5_max
value: 31.437135456835662
- type: nauc_ndcg_at_5_std
value: 5.162466911691187
- type: nauc_precision_at_1000_diff1
value: -3.3602607374185633
- type: nauc_precision_at_1000_max
value: 4.971880762242277
- type: nauc_precision_at_1000_std
value: 9.19452758668974
- type: nauc_precision_at_100_diff1
value: 7.510065324630119
- type: nauc_precision_at_100_max
value: 20.08725395064176
- type: nauc_precision_at_100_std
value: 24.3347599479104
- type: nauc_precision_at_10_diff1
value: 17.288987492657895
- type: nauc_precision_at_10_max
value: 30.523796629978005
- type: nauc_precision_at_10_std
value: 21.72855091830218
- type: nauc_precision_at_1_diff1
value: 46.11518802989986
- type: nauc_precision_at_1_max
value: 31.625471357672307
- type: nauc_precision_at_1_std
value: 1.234566602020697
- type: nauc_precision_at_20_diff1
value: 12.228489950055032
- type: nauc_precision_at_20_max
value: 27.04368010402764
- type: nauc_precision_at_20_std
value: 24.15754031166108
- type: nauc_precision_at_3_diff1
value: 26.83713388263207
- type: nauc_precision_at_3_max
value: 33.23777507125749
- type: nauc_precision_at_3_std
value: 10.323356806632543
- type: nauc_precision_at_5_diff1
value: 21.61560839260508
- type: nauc_precision_at_5_max
value: 32.66946145310579
- type: nauc_precision_at_5_std
value: 16.353775624744003
- type: nauc_recall_at_1000_diff1
value: 18.969678611942875
- type: nauc_recall_at_1000_max
value: 44.65492230931943
- type: nauc_recall_at_1000_std
value: 57.57661658969986
- type: nauc_recall_at_100_diff1
value: 32.144682780578435
- type: nauc_recall_at_100_max
value: 39.039873233473685
- type: nauc_recall_at_100_std
value: 41.27073159300163
- type: nauc_recall_at_10_diff1
value: 32.15567564970661
- type: nauc_recall_at_10_max
value: 32.11964259760779
- type: nauc_recall_at_10_std
value: 15.891022254121328
- type: nauc_recall_at_1_diff1
value: 46.719436307788634
- type: nauc_recall_at_1_max
value: 27.23331305118017
- type: nauc_recall_at_1_std
value: -3.294310698511136
- type: nauc_recall_at_20_diff1
value: 28.851896672624644
- type: nauc_recall_at_20_max
value: 32.287799296155114
- type: nauc_recall_at_20_std
value: 21.67937291007234
- type: nauc_recall_at_3_diff1
value: 34.39542239770237
- type: nauc_recall_at_3_max
value: 28.587385654425223
- type: nauc_recall_at_3_std
value: 3.1462139418981865
- type: nauc_recall_at_5_diff1
value: 31.662335151844633
- type: nauc_recall_at_5_max
value: 29.169339984865907
- type: nauc_recall_at_5_std
value: 9.423550205691733
- type: ndcg_at_1
value: 31.621
- type: ndcg_at_10
value: 42.711
- type: ndcg_at_100
value: 49.033
- type: ndcg_at_1000
value: 51.085
- type: ndcg_at_20
value: 45.443
- type: ndcg_at_3
value: 38.005
- type: ndcg_at_5
value: 39.751999999999995
- type: precision_at_1
value: 31.621
- type: precision_at_10
value: 7.968
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 4.795
- type: precision_at_3
value: 18.379
- type: precision_at_5
value: 12.740000000000002
- type: recall_at_1
value: 25.912000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 81.922
- type: recall_at_1000
value: 95.543
- type: recall_at_20
value: 65.082
- type: recall_at_3
value: 41.899
- type: recall_at_5
value: 46.708
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: CQADupstackRetrieval_is_a_combined_dataset
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 40.85841666666668
- type: ndcg_at_10
value: 40.85841666666668
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: main_score
value: 35.053
- type: map_at_1
value: 23.291999999999998
- type: map_at_10
value: 30.61
- type: map_at_100
value: 31.549
- type: map_at_1000
value: 31.644
- type: map_at_20
value: 31.041999999999998
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.425
- type: mrr_at_1
value: 26.07361963190184
- type: mrr_at_10
value: 33.167421365274116
- type: mrr_at_100
value: 34.029277736438495
- type: mrr_at_1000
value: 34.09235069536584
- type: mrr_at_20
value: 33.59034373634008
- type: mrr_at_3
value: 30.7515337423313
- type: mrr_at_5
value: 32.08588957055215
- type: nauc_map_at_1000_diff1
value: 43.481986624980415
- type: nauc_map_at_1000_max
value: 28.952163698686732
- type: nauc_map_at_1000_std
value: 10.782598183324414
- type: nauc_map_at_100_diff1
value: 43.45584335416967
- type: nauc_map_at_100_max
value: 28.911137574377232
- type: nauc_map_at_100_std
value: 10.76701853563041
- type: nauc_map_at_10_diff1
value: 43.47116890578832
- type: nauc_map_at_10_max
value: 28.653166569212946
- type: nauc_map_at_10_std
value: 10.17426104854042
- type: nauc_map_at_1_diff1
value: 49.75958213376796
- type: nauc_map_at_1_max
value: 24.470618320089454
- type: nauc_map_at_1_std
value: 6.492564751094104
- type: nauc_map_at_20_diff1
value: 43.35481926885264
- type: nauc_map_at_20_max
value: 28.699469771138414
- type: nauc_map_at_20_std
value: 10.45940778146071
- type: nauc_map_at_3_diff1
value: 44.485234854591035
- type: nauc_map_at_3_max
value: 28.38719705365597
- type: nauc_map_at_3_std
value: 9.000376354032333
- type: nauc_map_at_5_diff1
value: 43.44946037663669
- type: nauc_map_at_5_max
value: 28.476659272609623
- type: nauc_map_at_5_std
value: 9.703474173706583
- type: nauc_mrr_at_1000_diff1
value: 45.954395007886525
- type: nauc_mrr_at_1000_max
value: 31.50968463706721
- type: nauc_mrr_at_1000_std
value: 13.707444407915146
- type: nauc_mrr_at_100_diff1
value: 45.93279568895946
- type: nauc_mrr_at_100_max
value: 31.49035735663133
- type: nauc_mrr_at_100_std
value: 13.696695107846951
- type: nauc_mrr_at_10_diff1
value: 46.00075381149564
- type: nauc_mrr_at_10_max
value: 31.35587522300911
- type: nauc_mrr_at_10_std
value: 13.319928784978059
- type: nauc_mrr_at_1_diff1
value: 53.86601247498458
- type: nauc_mrr_at_1_max
value: 29.05934941003339
- type: nauc_mrr_at_1_std
value: 10.991599490187589
- type: nauc_mrr_at_20_diff1
value: 45.86633939971638
- type: nauc_mrr_at_20_max
value: 31.355545429804543
- type: nauc_mrr_at_20_std
value: 13.461168244272576
- type: nauc_mrr_at_3_diff1
value: 47.46632656927442
- type: nauc_mrr_at_3_max
value: 31.868101191363152
- type: nauc_mrr_at_3_std
value: 13.134952192744528
- type: nauc_mrr_at_5_diff1
value: 46.216287976414655
- type: nauc_mrr_at_5_max
value: 31.22808984287798
- type: nauc_mrr_at_5_std
value: 13.052212637671804
- type: nauc_ndcg_at_1000_diff1
value: 41.636814427170584
- type: nauc_ndcg_at_1000_max
value: 31.493143528814294
- type: nauc_ndcg_at_1000_std
value: 14.770912529263397
- type: nauc_ndcg_at_100_diff1
value: 41.12015328320773
- type: nauc_ndcg_at_100_max
value: 30.74936949964077
- type: nauc_ndcg_at_100_std
value: 14.126317942292099
- type: nauc_ndcg_at_10_diff1
value: 41.363853256357004
- type: nauc_ndcg_at_10_max
value: 29.967593685883593
- type: nauc_ndcg_at_10_std
value: 11.745736297343958
- type: nauc_ndcg_at_1_diff1
value: 53.86601247498458
- type: nauc_ndcg_at_1_max
value: 29.05934941003339
- type: nauc_ndcg_at_1_std
value: 10.991599490187589
- type: nauc_ndcg_at_20_diff1
value: 40.75029632252196
- type: nauc_ndcg_at_20_max
value: 29.8909640874289
- type: nauc_ndcg_at_20_std
value: 12.454934718956409
- type: nauc_ndcg_at_3_diff1
value: 43.63306400143029
- type: nauc_ndcg_at_3_max
value: 30.487292567301395
- type: nauc_ndcg_at_3_std
value: 11.38385449149101
- type: nauc_ndcg_at_5_diff1
value: 41.60699357804944
- type: nauc_ndcg_at_5_max
value: 29.677122670631594
- type: nauc_ndcg_at_5_std
value: 11.219704931901058
- type: nauc_precision_at_1000_diff1
value: 14.098873228986914
- type: nauc_precision_at_1000_max
value: 24.17087547157802
- type: nauc_precision_at_1000_std
value: 19.888193749463685
- type: nauc_precision_at_100_diff1
value: 23.179467074556886
- type: nauc_precision_at_100_max
value: 31.865564772690984
- type: nauc_precision_at_100_std
value: 25.13985731761706
- type: nauc_precision_at_10_diff1
value: 32.107718641883146
- type: nauc_precision_at_10_max
value: 34.91859600075913
- type: nauc_precision_at_10_std
value: 22.79400955617237
- type: nauc_precision_at_1_diff1
value: 53.86601247498458
- type: nauc_precision_at_1_max
value: 29.05934941003339
- type: nauc_precision_at_1_std
value: 10.991599490187589
- type: nauc_precision_at_20_diff1
value: 29.993188469468002
- type: nauc_precision_at_20_max
value: 35.296458769573086
- type: nauc_precision_at_20_std
value: 24.20327572204019
- type: nauc_precision_at_3_diff1
value: 38.99151580407392
- type: nauc_precision_at_3_max
value: 36.357023065975284
- type: nauc_precision_at_3_std
value: 19.43463406590944
- type: nauc_precision_at_5_diff1
value: 34.334835167755124
- type: nauc_precision_at_5_max
value: 35.54403568911307
- type: nauc_precision_at_5_std
value: 21.297076675377635
- type: nauc_recall_at_1000_diff1
value: 21.37160644447469
- type: nauc_recall_at_1000_max
value: 42.69368632941223
- type: nauc_recall_at_1000_std
value: 44.69786965651591
- type: nauc_recall_at_100_diff1
value: 26.1829124199152
- type: nauc_recall_at_100_max
value: 31.05778051148635
- type: nauc_recall_at_100_std
value: 24.13788905724134
- type: nauc_recall_at_10_diff1
value: 32.277913345812316
- type: nauc_recall_at_10_max
value: 29.95426768325743
- type: nauc_recall_at_10_std
value: 12.182289596195755
- type: nauc_recall_at_1_diff1
value: 49.75958213376796
- type: nauc_recall_at_1_max
value: 24.470618320089454
- type: nauc_recall_at_1_std
value: 6.492564751094104
- type: nauc_recall_at_20_diff1
value: 28.594583651409373
- type: nauc_recall_at_20_max
value: 28.61050190860186
- type: nauc_recall_at_20_std
value: 14.453928140032604
- type: nauc_recall_at_3_diff1
value: 37.26827475373021
- type: nauc_recall_at_3_max
value: 30.24664533196025
- type: nauc_recall_at_3_std
value: 10.088814497838317
- type: nauc_recall_at_5_diff1
value: 33.012511168504346
- type: nauc_recall_at_5_max
value: 28.863956457849227
- type: nauc_recall_at_5_std
value: 10.866060080770383
- type: ndcg_at_1
value: 26.074
- type: ndcg_at_10
value: 35.053
- type: ndcg_at_100
value: 39.877
- type: ndcg_at_1000
value: 42.219
- type: ndcg_at_20
      value: 36.554
- type: ndcg_at_3
value: 30.25
- type: ndcg_at_5
value: 32.46
- type: precision_at_1
value: 26.074
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 3.213
- type: precision_at_3
value: 13.088
- type: precision_at_5
value: 9.325
- type: recall_at_1
      value: 23.292
- type: recall_at_10
value: 46.148
- type: recall_at_100
      value: 68.248
- type: recall_at_1000
value: 85.455
- type: recall_at_20
value: 51.734
- type: recall_at_3
value: 33.131
- type: recall_at_5
value: 38.546
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: main_score
value: 27.525
- type: map_at_1
      value: 15.813
- type: map_at_10
      value: 22.879
- type: map_at_100
value: 23.992
- type: map_at_1000
      value: 24.127
- type: map_at_20
value: 23.452
- type: map_at_3
value: 20.467
- type: map_at_5
value: 21.767
- type: mrr_at_1
value: 19.201651754989676
- type: mrr_at_10
value: 26.47224133975686
- type: mrr_at_100
value: 27.412766979166342
- type: mrr_at_1000
value: 27.49631476670978
- type: mrr_at_20
value: 26.9691663879413
- type: mrr_at_3
value: 24.013535214498745
- type: mrr_at_5
value: 25.441615049323303
- type: nauc_map_at_1000_diff1
value: 33.697813145909535
- type: nauc_map_at_1000_max
value: 26.509494140027996
- type: nauc_map_at_1000_std
value: 2.993849542775133
- type: nauc_map_at_100_diff1
value: 33.671087749349674
- type: nauc_map_at_100_max
value: 26.472678055525336
- type: nauc_map_at_100_std
value: 2.956494527720355
- type: nauc_map_at_10_diff1
value: 33.914740537035435
- type: nauc_map_at_10_max
value: 26.5349486074814
- type: nauc_map_at_10_std
value: 2.3474576992114304
- type: nauc_map_at_1_diff1
value: 39.451484530341254
- type: nauc_map_at_1_max
value: 25.790802427354205
- type: nauc_map_at_1_std
value: -1.8340911432347162
- type: nauc_map_at_20_diff1
value: 33.766215747904695
- type: nauc_map_at_20_max
value: 26.440032024795805
- type: nauc_map_at_20_std
value: 2.6591992745156485
- type: nauc_map_at_3_diff1
value: 34.80477662436832
- type: nauc_map_at_3_max
value: 26.232579057821294
- type: nauc_map_at_3_std
value: 1.0628053044692038
- type: nauc_map_at_5_diff1
value: 34.44953511091354
- type: nauc_map_at_5_max
value: 26.329117036695354
- type: nauc_map_at_5_std
value: 1.6829673952842554
- type: nauc_mrr_at_1000_diff1
value: 33.13732180476133
- type: nauc_mrr_at_1000_max
value: 27.911825182206524
- type: nauc_mrr_at_1000_std
value: 3.570486982023914
- type: nauc_mrr_at_100_diff1
value: 33.112653270534636
- type: nauc_mrr_at_100_max
value: 27.897770062852732
- type: nauc_mrr_at_100_std
value: 3.5920129247128028
- type: nauc_mrr_at_10_diff1
value: 33.27584578509099
- type: nauc_mrr_at_10_max
value: 28.123344470902044
- type: nauc_mrr_at_10_std
value: 3.1806023776161005
- type: nauc_mrr_at_1_diff1
value: 38.697906401251565
- type: nauc_mrr_at_1_max
value: 27.526788964221176
- type: nauc_mrr_at_1_std
value: -0.3872399197836332
- type: nauc_mrr_at_20_diff1
value: 33.14710189298942
- type: nauc_mrr_at_20_max
value: 27.925418071214477
- type: nauc_mrr_at_20_std
value: 3.410762781508218
- type: nauc_mrr_at_3_diff1
value: 33.87772552463924
- type: nauc_mrr_at_3_max
value: 28.007003297502216
- type: nauc_mrr_at_3_std
value: 1.9486591805981224
- type: nauc_mrr_at_5_diff1
value: 33.62067092202846
- type: nauc_mrr_at_5_max
value: 28.14249070532696
- type: nauc_mrr_at_5_std
value: 2.6447040667824218
- type: nauc_ndcg_at_1000_diff1
value: 31.23455010115525
- type: nauc_ndcg_at_1000_max
value: 26.928025566178913
- type: nauc_ndcg_at_1000_std
value: 6.941305960469611
- type: nauc_ndcg_at_100_diff1
value: 30.584344786502747
- type: nauc_ndcg_at_100_max
value: 26.404821521795537
- type: nauc_ndcg_at_100_std
value: 7.0334275625510925
- type: nauc_ndcg_at_10_diff1
value: 31.53451395934299
- type: nauc_ndcg_at_10_max
value: 27.05918031675037
- type: nauc_ndcg_at_10_std
value: 4.439717091540959
- type: nauc_ndcg_at_1_diff1
value: 38.697906401251565
- type: nauc_ndcg_at_1_max
value: 27.526788964221176
- type: nauc_ndcg_at_1_std
value: -0.3872399197836332
- type: nauc_ndcg_at_20_diff1
value: 31.12144557343197
- type: nauc_ndcg_at_20_max
value: 26.542119575357965
- type: nauc_ndcg_at_20_std
value: 5.3406069749732525
- type: nauc_ndcg_at_3_diff1
value: 33.01724233874462
- type: nauc_ndcg_at_3_max
value: 27.140135730286946
- type: nauc_ndcg_at_3_std
value: 1.9208853678075062
- type: nauc_ndcg_at_5_diff1
value: 32.55051796045806
- type: nauc_ndcg_at_5_max
value: 26.955239421636346
- type: nauc_ndcg_at_5_std
value: 3.0379868805913652
- type: nauc_precision_at_1000_diff1
value: 4.618759880285172
- type: nauc_precision_at_1000_max
value: 15.135402391589992
- type: nauc_precision_at_1000_std
value: 17.641125584501353
- type: nauc_precision_at_100_diff1
value: 10.39883535965785
- type: nauc_precision_at_100_max
value: 20.08846103789256
- type: nauc_precision_at_100_std
value: 19.449422467727224
- type: nauc_precision_at_10_diff1
value: 22.298962818126192
- type: nauc_precision_at_10_max
value: 28.89863016237585
- type: nauc_precision_at_10_std
value: 11.063401323032155
- type: nauc_precision_at_1_diff1
value: 38.697906401251565
- type: nauc_precision_at_1_max
value: 27.526788964221176
- type: nauc_precision_at_1_std
value: -0.3872399197836332
- type: nauc_precision_at_20_diff1
value: 19.176385926878414
- type: nauc_precision_at_20_max
value: 25.917593281871675
- type: nauc_precision_at_20_std
value: 13.11450466413103
- type: nauc_precision_at_3_diff1
value: 28.031695189128474
- type: nauc_precision_at_3_max
value: 28.9642194082244
- type: nauc_precision_at_3_std
value: 4.347834807504182
- type: nauc_precision_at_5_diff1
value: 26.272317529418892
- type: nauc_precision_at_5_max
value: 29.150315424317114
- type: nauc_precision_at_5_std
value: 6.880885398540699
- type: nauc_recall_at_1000_diff1
value: 17.4273150148978
- type: nauc_recall_at_1000_max
value: 24.306401198860677
- type: nauc_recall_at_1000_std
value: 29.662613615698568
- type: nauc_recall_at_100_diff1
value: 18.43107428764886
- type: nauc_recall_at_100_max
value: 20.971000173192305
- type: nauc_recall_at_100_std
value: 19.71647423515453
- type: nauc_recall_at_10_diff1
value: 24.16733448276029
- type: nauc_recall_at_10_max
value: 24.352699469715134
- type: nauc_recall_at_10_std
value: 8.209628518853242
- type: nauc_recall_at_1_diff1
value: 39.451484530341254
- type: nauc_recall_at_1_max
value: 25.790802427354205
- type: nauc_recall_at_1_std
value: -1.8340911432347162
- type: nauc_recall_at_20_diff1
value: 22.67002641081412
- type: nauc_recall_at_20_max
value: 22.634810976567632
- type: nauc_recall_at_20_std
value: 11.08185078231441
- type: nauc_recall_at_3_diff1
value: 28.883409519249298
- type: nauc_recall_at_3_max
value: 25.08426193015333
- type: nauc_recall_at_3_std
value: 3.332702402821052
- type: nauc_recall_at_5_diff1
value: 27.248817428767353
- type: nauc_recall_at_5_max
value: 24.488697770907862
- type: nauc_recall_at_5_std
value: 5.150559322926742
- type: ndcg_at_1
      value: 19.202
- type: ndcg_at_10
value: 27.525
- type: ndcg_at_100
value: 32.917
- type: ndcg_at_1000
      value: 36.072
- type: ndcg_at_20
value: 29.369
- type: ndcg_at_3
      value: 22.998
- type: ndcg_at_5
value: 25.089
- type: precision_at_1
      value: 19.202
- type: precision_at_10
value: 5.114
- type: precision_at_100
value: 0.914
- type: precision_at_1000
      value: 0.138
- type: precision_at_20
value: 3.068
- type: precision_at_3
value: 10.84
- type: precision_at_5
value: 8.039
- type: recall_at_1
      value: 15.813
- type: recall_at_10
value: 38.011
- type: recall_at_100
value: 62.316
- type: recall_at_1000
value: 84.787
- type: recall_at_20
value: 44.796
- type: recall_at_3
      value: 25.534
- type: recall_at_5
      value: 30.869
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: main_score
value: 40.521
- type: map_at_1
value: 25.471
- type: map_at_10
value: 35.022
- type: map_at_100
value: 36.189
- type: map_at_1000
value: 36.307
- type: map_at_20
      value: 35.669
- type: map_at_3
value: 32.106
- type: map_at_5
value: 33.77
- type: mrr_at_1
value: 30.50373134328358
- type: mrr_at_10
value: 39.24503227908077
- type: mrr_at_100
value: 40.151748706499774
- type: mrr_at_1000
value: 40.22220003252721
- type: mrr_at_20
value: 39.8059758148897
- type: mrr_at_3
value: 36.73818407960197
- type: mrr_at_5
value: 38.1467661691542
- type: nauc_map_at_1000_diff1
value: 46.16248622121721
- type: nauc_map_at_1000_max
value: 40.52646385518007
- type: nauc_map_at_1000_std
value: 3.148266275747802
- type: nauc_map_at_100_diff1
value: 46.17396178097105
- type: nauc_map_at_100_max
value: 40.54391828366793
- type: nauc_map_at_100_std
value: 3.1465539515114047
- type: nauc_map_at_10_diff1
value: 46.235749959339614
- type: nauc_map_at_10_max
value: 40.440734073263016
- type: nauc_map_at_10_std
value: 2.8300771576177626
- type: nauc_map_at_1_diff1
value: 52.31836017894301
- type: nauc_map_at_1_max
value: 39.98411755588766
- type: nauc_map_at_1_std
value: -1.3807664175034557
- type: nauc_map_at_20_diff1
value: 46.22956225666944
- type: nauc_map_at_20_max
value: 40.38149532254275
- type: nauc_map_at_20_std
value: 2.9376913527139608
- type: nauc_map_at_3_diff1
value: 46.66513417112219
- type: nauc_map_at_3_max
value: 39.42343560398367
- type: nauc_map_at_3_std
value: 1.2211402555017814
- type: nauc_map_at_5_diff1
value: 46.458786087674014
- type: nauc_map_at_5_max
value: 40.55062568009025
- type: nauc_map_at_5_std
value: 2.874713984722366
- type: nauc_mrr_at_1000_diff1
value: 45.02880964596229
- type: nauc_mrr_at_1000_max
value: 40.54670837151151
- type: nauc_mrr_at_1000_std
value: 1.9361943758959246
- type: nauc_mrr_at_100_diff1
value: 45.0141231687371
- type: nauc_mrr_at_100_max
value: 40.563093939846254
- type: nauc_mrr_at_100_std
value: 1.95631717346565
- type: nauc_mrr_at_10_diff1
value: 45.02510345908053
- type: nauc_mrr_at_10_max
value: 40.65201686211006
- type: nauc_mrr_at_10_std
value: 1.765797491494287
- type: nauc_mrr_at_1_diff1
value: 50.97368399162673
- type: nauc_mrr_at_1_max
value: 40.90768065197206
- type: nauc_mrr_at_1_std
value: -1.4950717729817018
- type: nauc_mrr_at_20_diff1
value: 45.01757033486232
- type: nauc_mrr_at_20_max
value: 40.469096874526066
- type: nauc_mrr_at_20_std
value: 1.8814650823309433
- type: nauc_mrr_at_3_diff1
value: 45.41619994832078
- type: nauc_mrr_at_3_max
value: 39.97134246014811
- type: nauc_mrr_at_3_std
value: 0.351963662304222
- type: nauc_mrr_at_5_diff1
value: 45.1751735123411
- type: nauc_mrr_at_5_max
value: 40.78799409404439
- type: nauc_mrr_at_5_std
value: 1.9642777530569973
- type: nauc_ndcg_at_1000_diff1
value: 43.718675542961904
- type: nauc_ndcg_at_1000_max
value: 40.77838921628359
- type: nauc_ndcg_at_1000_std
value: 5.566597131514415
- type: nauc_ndcg_at_100_diff1
value: 43.60801649469792
- type: nauc_ndcg_at_100_max
value: 41.178769387330796
- type: nauc_ndcg_at_100_std
value: 6.049517999609993
- type: nauc_ndcg_at_10_diff1
value: 43.842412361059004
- type: nauc_ndcg_at_10_max
value: 40.6519609548175
- type: nauc_ndcg_at_10_std
value: 4.201266898997162
- type: nauc_ndcg_at_1_diff1
value: 50.97368399162673
- type: nauc_ndcg_at_1_max
value: 40.90768065197206
- type: nauc_ndcg_at_1_std
value: -1.4950717729817018
- type: nauc_ndcg_at_20_diff1
value: 43.85304850871846
- type: nauc_ndcg_at_20_max
value: 40.32052013131906
- type: nauc_ndcg_at_20_std
value: 4.728903608087234
- type: nauc_ndcg_at_3_diff1
value: 44.21918974277671
- type: nauc_ndcg_at_3_max
value: 38.960642621790456
- type: nauc_ndcg_at_3_std
value: 1.5413581396590283
- type: nauc_ndcg_at_5_diff1
value: 44.17111959292946
- type: nauc_ndcg_at_5_max
value: 40.879393486870796
- type: nauc_ndcg_at_5_std
value: 4.292430322369627
- type: nauc_precision_at_1000_diff1
value: -15.217116951096473
- type: nauc_precision_at_1000_max
value: -3.2195266520788293
- type: nauc_precision_at_1000_std
value: 3.9128797066726846
- type: nauc_precision_at_100_diff1
value: 0.3739578597713093
- type: nauc_precision_at_100_max
value: 16.020214815116475
- type: nauc_precision_at_100_std
value: 12.407216133940173
- type: nauc_precision_at_10_diff1
value: 22.78622694355213
- type: nauc_precision_at_10_max
value: 30.934571158762775
- type: nauc_precision_at_10_std
value: 7.387132441153662
- type: nauc_precision_at_1_diff1
value: 50.97368399162673
- type: nauc_precision_at_1_max
value: 40.90768065197206
- type: nauc_precision_at_1_std
value: -1.4950717729817018
- type: nauc_precision_at_20_diff1
value: 15.851699766979477
- type: nauc_precision_at_20_max
value: 25.760376623349373
- type: nauc_precision_at_20_std
value: 8.843769866250064
- type: nauc_precision_at_3_diff1
value: 33.40916192309544
- type: nauc_precision_at_3_max
value: 34.62137182252703
- type: nauc_precision_at_3_std
value: 2.6723118388566376
- type: nauc_precision_at_5_diff1
value: 29.839568032323736
- type: nauc_precision_at_5_max
value: 35.79411746926457
- type: nauc_precision_at_5_std
value: 8.075263629982045
- type: nauc_recall_at_1000_diff1
value: 22.684337017050314
- type: nauc_recall_at_1000_max
value: 38.75083488225343
- type: nauc_recall_at_1000_std
value: 46.20014728505404
- type: nauc_recall_at_100_diff1
value: 32.16637906784691
- type: nauc_recall_at_100_max
value: 41.16460712003215
- type: nauc_recall_at_100_std
value: 22.666195059036536
- type: nauc_recall_at_10_diff1
value: 35.53872376778553
- type: nauc_recall_at_10_max
value: 38.239674930598554
- type: nauc_recall_at_10_std
value: 8.764170731037375
- type: nauc_recall_at_1_diff1
value: 52.31836017894301
- type: nauc_recall_at_1_max
value: 39.98411755588766
- type: nauc_recall_at_1_std
value: -1.3807664175034557
- type: nauc_recall_at_20_diff1
value: 34.77159952615243
- type: nauc_recall_at_20_max
value: 35.99268561688956
- type: nauc_recall_at_20_std
value: 11.063781846789626
- type: nauc_recall_at_3_diff1
value: 38.59836732978252
- type: nauc_recall_at_3_max
value: 36.14336770585555
- type: nauc_recall_at_3_std
value: 3.330194066081952
- type: nauc_recall_at_5_diff1
value: 37.471534644016785
- type: nauc_recall_at_5_max
value: 39.941421167584906
- type: nauc_recall_at_5_std
value: 9.330375158059901
- type: ndcg_at_1
      value: 30.504
- type: ndcg_at_10
value: 40.521
- type: ndcg_at_100
value: 45.869
- type: ndcg_at_1000
value: 48.381
- type: ndcg_at_20
value: 42.664
- type: ndcg_at_3
value: 35.537
- type: ndcg_at_5
value: 37.874
- type: precision_at_1
      value: 30.504
- type: precision_at_10
      value: 6.922
- type: precision_at_100
value: 1.087
- type: precision_at_1000
      value: 0.141
- type: precision_at_20
value: 4.062
- type: precision_at_3
      value: 16.449
- type: precision_at_5
value: 11.53
- type: recall_at_1
value: 25.471
- type: recall_at_10
value: 53.115
- type: recall_at_100
value: 76.247
- type: recall_at_1000
value: 93.633
- type: recall_at_20
value: 60.856
- type: recall_at_3
value: 39.149
- type: recall_at_5
      value: 45.355
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: main_score
      value: 40.084
- type: map_at_1
value: 23.916
- type: map_at_10
value: 33.898
- type: map_at_100
value: 35.524
- type: map_at_1000
value: 35.763
- type: map_at_20
      value: 34.7
- type: map_at_3
value: 30.72
- type: map_at_5
value: 32.444
- type: mrr_at_1
value: 27.865612648221344
- type: mrr_at_10
value: 38.110373925591325
- type: mrr_at_100
value: 39.103867355603136
- type: mrr_at_1000
value: 39.155682308778616
- type: mrr_at_20
value: 38.674497071725696
- type: mrr_at_3
value: 35.210803689064576
- type: mrr_at_5
value: 36.99934123847168
- type: nauc_map_at_1000_diff1
value: 43.04370425444575
- type: nauc_map_at_1000_max
value: 30.664333341508517
- type: nauc_map_at_1000_std
value: 13.255841990616501
- type: nauc_map_at_100_diff1
value: 43.30950288942624
- type: nauc_map_at_100_max
value: 30.88701122881409
- type: nauc_map_at_100_std
value: 13.044875063416047
- type: nauc_map_at_10_diff1
value: 43.13368196505275
- type: nauc_map_at_10_max
value: 30.510777038103758
- type: nauc_map_at_10_std
value: 11.718306205503097
- type: nauc_map_at_1_diff1
value: 51.34182005448936
- type: nauc_map_at_1_max
value: 29.964954096304396
- type: nauc_map_at_1_std
value: 7.929661027160745
- type: nauc_map_at_20_diff1
value: 43.31624587145438
- type: nauc_map_at_20_max
value: 30.74154334111207
- type: nauc_map_at_20_std
value: 12.32652647361836
- type: nauc_map_at_3_diff1
value: 43.1491545591217
- type: nauc_map_at_3_max
value: 29.448225669130128
- type: nauc_map_at_3_std
value: 9.735131796506169
- type: nauc_map_at_5_diff1
value: 43.33647018722699
- type: nauc_map_at_5_max
value: 29.82004211927872
- type: nauc_map_at_5_std
value: 10.811941747327253
- type: nauc_mrr_at_1000_diff1
value: 42.09165265772457
- type: nauc_mrr_at_1000_max
value: 32.05875923131647
- type: nauc_mrr_at_1000_std
value: 15.019814870801303
- type: nauc_mrr_at_100_diff1
value: 42.08967964203582
- type: nauc_mrr_at_100_max
value: 32.07299417006864
- type: nauc_mrr_at_100_std
value: 15.057319380447614
- type: nauc_mrr_at_10_diff1
value: 41.841369406148246
- type: nauc_mrr_at_10_max
value: 31.767693589635538
- type: nauc_mrr_at_10_std
value: 14.602638735669798
- type: nauc_mrr_at_1_diff1
value: 50.062677615419304
- type: nauc_mrr_at_1_max
value: 33.35584104516006
- type: nauc_mrr_at_1_std
value: 11.42115012466949
- type: nauc_mrr_at_20_diff1
value: 41.93352325907799
- type: nauc_mrr_at_20_max
value: 32.015602545857945
- type: nauc_mrr_at_20_std
value: 15.048275956047814
- type: nauc_mrr_at_3_diff1
value: 41.918393480229014
- type: nauc_mrr_at_3_max
value: 31.253629045078224
- type: nauc_mrr_at_3_std
value: 13.577771791747217
- type: nauc_mrr_at_5_diff1
value: 42.020303609879015
- type: nauc_mrr_at_5_max
value: 31.71276631449414
- type: nauc_mrr_at_5_std
value: 14.160071868742637
- type: nauc_ndcg_at_1000_diff1
value: 41.073313917406516
- type: nauc_ndcg_at_1000_max
value: 31.874785583667343
- type: nauc_ndcg_at_1000_std
value: 17.392846103885827
- type: nauc_ndcg_at_100_diff1
value: 41.36609192671821
- type: nauc_ndcg_at_100_max
value: 32.1429966230732
- type: nauc_ndcg_at_100_std
value: 17.635443742312578
- type: nauc_ndcg_at_10_diff1
value: 40.16969739206176
- type: nauc_ndcg_at_10_max
value: 30.655050133517907
- type: nauc_ndcg_at_10_std
value: 15.31416270805731
- type: nauc_ndcg_at_1_diff1
value: 50.062677615419304
- type: nauc_ndcg_at_1_max
value: 33.35584104516006
- type: nauc_ndcg_at_1_std
value: 11.42115012466949
- type: nauc_ndcg_at_20_diff1
value: 40.65149703452073
- type: nauc_ndcg_at_20_max
value: 31.49158572383702
- type: nauc_ndcg_at_20_std
value: 16.515600802503588
- type: nauc_ndcg_at_3_diff1
value: 40.978434285347326
- type: nauc_ndcg_at_3_max
value: 30.152983643295965
- type: nauc_ndcg_at_3_std
value: 12.216265569919356
- type: nauc_ndcg_at_5_diff1
value: 41.08935148839345
- type: nauc_ndcg_at_5_max
value: 30.270289469266555
- type: nauc_ndcg_at_5_std
value: 13.872257416203936
- type: nauc_precision_at_1000_diff1
value: -23.49105492946047
- type: nauc_precision_at_1000_max
value: -14.82348334333618
- type: nauc_precision_at_1000_std
value: 25.58547404406785
- type: nauc_precision_at_100_diff1
value: -7.981292902854982
- type: nauc_precision_at_100_max
value: 0.3216310748533712
- type: nauc_precision_at_100_std
value: 30.619279987080606
- type: nauc_precision_at_10_diff1
value: 16.699669745243195
- type: nauc_precision_at_10_max
value: 24.848221992404866
- type: nauc_precision_at_10_std
value: 25.483080484054128
- type: nauc_precision_at_1_diff1
value: 50.062677615419304
- type: nauc_precision_at_1_max
value: 33.35584104516006
- type: nauc_precision_at_1_std
value: 11.42115012466949
- type: nauc_precision_at_20_diff1
value: 9.661364668172661
- type: nauc_precision_at_20_max
value: 18.15490912668976
- type: nauc_precision_at_20_std
value: 28.942530404656207
- type: nauc_precision_at_3_diff1
value: 28.173149805964336
- type: nauc_precision_at_3_max
value: 29.125517533363045
- type: nauc_precision_at_3_std
value: 16.440247682256874
- type: nauc_precision_at_5_diff1
value: 26.337016666473417
- type: nauc_precision_at_5_max
value: 27.91482399852503
- type: nauc_precision_at_5_std
value: 20.584790906600297
- type: nauc_recall_at_1000_diff1
value: 25.27962582492483
- type: nauc_recall_at_1000_max
value: 53.4157087239144
- type: nauc_recall_at_1000_std
value: 64.84320824589436
- type: nauc_recall_at_100_diff1
value: 32.52503833916644
- type: nauc_recall_at_100_max
value: 34.43578471306039
- type: nauc_recall_at_100_std
value: 37.12451201750556
- type: nauc_recall_at_10_diff1
value: 30.854734920106758
- type: nauc_recall_at_10_max
value: 27.70071769548424
- type: nauc_recall_at_10_std
value: 18.679668303532377
- type: nauc_recall_at_1_diff1
value: 51.34182005448936
- type: nauc_recall_at_1_max
value: 29.964954096304396
- type: nauc_recall_at_1_std
value: 7.929661027160745
- type: nauc_recall_at_20_diff1
value: 31.67584335957749
- type: nauc_recall_at_20_max
value: 30.819782365046017
- type: nauc_recall_at_20_std
value: 24.91327729486532
- type: nauc_recall_at_3_diff1
value: 34.07385889318035
- type: nauc_recall_at_3_max
value: 26.55094252259986
- type: nauc_recall_at_3_std
value: 10.867282036873508
- type: nauc_recall_at_5_diff1
value: 33.23389303702456
- type: nauc_recall_at_5_max
value: 26.993134299145368
- type: nauc_recall_at_5_std
value: 14.066236376235505
- type: ndcg_at_1
      value: 27.866
- type: ndcg_at_10
      value: 40.084
- type: ndcg_at_100
value: 46.267
- type: ndcg_at_1000
value: 48.701
- type: ndcg_at_20
value: 42.34
- type: ndcg_at_3
      value: 34.584
- type: ndcg_at_5
value: 37.264
- type: precision_at_1
      value: 27.866
- type: precision_at_10
      value: 7.708
- type: precision_at_100
value: 1.569
- type: precision_at_1000
value: 0.247
- type: precision_at_20
value: 4.852
- type: precision_at_3
value: 16.337
- type: precision_at_5
value: 12.055
- type: recall_at_1
value: 23.916
- type: recall_at_10
value: 52.903
- type: recall_at_100
value: 79.777
- type: recall_at_1000
value: 94.72
- type: recall_at_20
value: 61.312
- type: recall_at_3
value: 37.711
- type: recall_at_5
value: 44.603
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: main_score
value: 29.973
- type: map_at_1
value: 17.23
- type: map_at_10
value: 25.097
- type: map_at_100
value: 26.264
- type: map_at_1000
value: 26.369
- type: map_at_20
      value: 25.796
- type: map_at_3
value: 22.487
- type: map_at_5
      value: 23.978
- type: mrr_at_1
value: 19.038817005545287
- type: mrr_at_10
value: 27.142857142857142
- type: mrr_at_100
value: 28.18400454920016
- type: mrr_at_1000
value: 28.261243392575775
- type: mrr_at_20
value: 27.772863922037278
- type: mrr_at_3
value: 24.645717806531106
- type: mrr_at_5
value: 26.05052372150338
- type: nauc_map_at_1000_diff1
value: 30.011176313385096
- type: nauc_map_at_1000_max
value: 30.68572568741437
- type: nauc_map_at_1000_std
value: 6.891720154828985
- type: nauc_map_at_100_diff1
value: 30.0320356281249
- type: nauc_map_at_100_max
value: 30.721766519272826
- type: nauc_map_at_100_std
value: 6.887771590804904
- type: nauc_map_at_10_diff1
value: 30.18218019028145
- type: nauc_map_at_10_max
value: 30.676695850605086
- type: nauc_map_at_10_std
value: 6.35931077390129
- type: nauc_map_at_1_diff1
value: 36.65613803562446
- type: nauc_map_at_1_max
value: 34.41372061280891
- type: nauc_map_at_1_std
value: 5.643263116945109
- type: nauc_map_at_20_diff1
value: 29.97325431788116
- type: nauc_map_at_20_max
value: 30.6373674319881
- type: nauc_map_at_20_std
value: 6.627175965630369
- type: nauc_map_at_3_diff1
value: 30.86131371052504
- type: nauc_map_at_3_max
value: 31.15523969829247
- type: nauc_map_at_3_std
value: 5.567555712000783
- type: nauc_map_at_5_diff1
value: 30.848087000113118
- type: nauc_map_at_5_max
value: 31.459896541460697
- type: nauc_map_at_5_std
value: 5.518271061275222
- type: nauc_mrr_at_1000_diff1
value: 29.453047003985
- type: nauc_mrr_at_1000_max
value: 30.19882876836656
- type: nauc_mrr_at_1000_std
value: 6.626130218002384
- type: nauc_mrr_at_100_diff1
value: 29.44273618213682
- type: nauc_mrr_at_100_max
value: 30.20792006793222
- type: nauc_mrr_at_100_std
value: 6.6326270055928225
- type: nauc_mrr_at_10_diff1
value: 29.481500991416937
- type: nauc_mrr_at_10_max
value: 30.166282832131248
- type: nauc_mrr_at_10_std
value: 6.194497427521731
- type: nauc_mrr_at_1_diff1
value: 35.00165816992082
- type: nauc_mrr_at_1_max
value: 33.779777100720864
- type: nauc_mrr_at_1_std
value: 5.621116520393843
- type: nauc_mrr_at_20_diff1
value: 29.420661046476237
- type: nauc_mrr_at_20_max
value: 30.096026694199697
- type: nauc_mrr_at_20_std
value: 6.418490136892468
- type: nauc_mrr_at_3_diff1
value: 30.15562602647593
- type: nauc_mrr_at_3_max
value: 31.24169802519362
- type: nauc_mrr_at_3_std
value: 5.292177214159827
- type: nauc_mrr_at_5_diff1
value: 29.85812493582057
- type: nauc_mrr_at_5_max
value: 30.84309432039849
- type: nauc_mrr_at_5_std
value: 5.17373327205622
- type: nauc_ndcg_at_1000_diff1
value: 27.06661442385399
- type: nauc_ndcg_at_1000_max
value: 28.96911571800487
- type: nauc_ndcg_at_1000_std
value: 10.418806432871733
- type: nauc_ndcg_at_100_diff1
value: 27.146281316839314
- type: nauc_ndcg_at_100_max
value: 29.044799456854186
- type: nauc_ndcg_at_100_std
value: 10.508336096486618
- type: nauc_ndcg_at_10_diff1
value: 27.420874599878342
- type: nauc_ndcg_at_10_max
value: 28.714090994664755
- type: nauc_ndcg_at_10_std
value: 7.652695188853375
- type: nauc_ndcg_at_1_diff1
value: 35.00165816992082
- type: nauc_ndcg_at_1_max
value: 33.779777100720864
- type: nauc_ndcg_at_1_std
value: 5.621116520393843
- type: nauc_ndcg_at_20_diff1
value: 26.854270351760974
- type: nauc_ndcg_at_20_max
value: 28.52303486745037
- type: nauc_ndcg_at_20_std
value: 8.34449264443146
- type: nauc_ndcg_at_3_diff1
value: 28.683665095071454
- type: nauc_ndcg_at_3_max
value: 30.21167815580974
- type: nauc_ndcg_at_3_std
value: 5.57510161196495
- type: nauc_ndcg_at_5_diff1
value: 28.568200018893215
- type: nauc_ndcg_at_5_max
value: 30.268878618614377
- type: nauc_ndcg_at_5_std
value: 5.561108887007736
- type: nauc_precision_at_1000_diff1
value: -15.949370649937453
- type: nauc_precision_at_1000_max
value: -12.55230242997234
- type: nauc_precision_at_1000_std
value: 7.964001054475982
- type: nauc_precision_at_100_diff1
value: 3.8015059641621365
- type: nauc_precision_at_100_max
value: 9.502394070121735
- type: nauc_precision_at_100_std
value: 17.651392778848304
- type: nauc_precision_at_10_diff1
value: 18.272370317932598
- type: nauc_precision_at_10_max
value: 22.250936689177696
- type: nauc_precision_at_10_std
value: 11.326091089478126
- type: nauc_precision_at_1_diff1
value: 35.00165816992082
- type: nauc_precision_at_1_max
value: 33.779777100720864
- type: nauc_precision_at_1_std
value: 5.621116520393843
- type: nauc_precision_at_20_diff1
value: 14.701205402696422
- type: nauc_precision_at_20_max
value: 19.479826509253293
- type: nauc_precision_at_20_std
value: 11.944454432741377
- type: nauc_precision_at_3_diff1
value: 24.240226319020405
- type: nauc_precision_at_3_max
value: 28.68870471669554
- type: nauc_precision_at_3_std
value: 6.574024673506498
- type: nauc_precision_at_5_diff1
value: 23.17004836875319
- type: nauc_precision_at_5_max
value: 28.191016385192867
- type: nauc_precision_at_5_std
value: 6.514807345015352
- type: nauc_recall_at_1000_diff1
value: 3.893631175061775
- type: nauc_recall_at_1000_max
value: 19.271373005950228
- type: nauc_recall_at_1000_std
value: 45.08461198752793
- type: nauc_recall_at_100_diff1
value: 16.56155043674209
- type: nauc_recall_at_100_max
value: 22.519466525026544
- type: nauc_recall_at_100_std
value: 27.062281302347973
- type: nauc_recall_at_10_diff1
value: 19.666472561806202
- type: nauc_recall_at_10_max
value: 22.619769621626244
- type: nauc_recall_at_10_std
value: 11.00062407965151
- type: nauc_recall_at_1_diff1
value: 36.65613803562446
- type: nauc_recall_at_1_max
value: 34.41372061280891
- type: nauc_recall_at_1_std
value: 5.643263116945109
- type: nauc_recall_at_20_diff1
value: 16.971894573394206
- type: nauc_recall_at_20_max
value: 21.44001516902887
- type: nauc_recall_at_20_std
value: 13.106111366241002
- type: nauc_recall_at_3_diff1
value: 23.337485705564454
- type: nauc_recall_at_3_max
value: 26.926134944792864
- type: nauc_recall_at_3_std
value: 6.142956932796485
- type: nauc_recall_at_5_diff1
value: 23.052394072882375
- type: nauc_recall_at_5_max
value: 27.026444224445406
- type: nauc_recall_at_5_std
value: 5.735439526218693
- type: ndcg_at_1
value: 19.039
- type: ndcg_at_10
value: 29.973
- type: ndcg_at_100
value: 35.538
- type: ndcg_at_1000
      value: 38.197
- type: ndcg_at_20
value: 32.352
- type: ndcg_at_3
value: 24.89
- type: ndcg_at_5
value: 27.427
- type: precision_at_1
value: 19.039
- type: precision_at_10
value: 5.009
- type: precision_at_100
value: 0.843
- type: precision_at_1000
      value: 0.118
- type: precision_at_20
value: 3.05
- type: precision_at_3
value: 10.906
- type: precision_at_5
value: 8.059
- type: recall_at_1
value: 17.23
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 68.309
- type: recall_at_1000
value: 88.263
- type: recall_at_20
value: 52.039
- type: recall_at_3
value: 29.559
- type: recall_at_5
value: 35.49
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 23.837
- type: map_at_1
      value: 9.273
- type: map_at_10
value: 16.492
- type: map_at_100
value: 18.236
- type: map_at_1000
      value: 18.423
- type: map_at_20
value: 17.395
- type: map_at_3
      value: 13.534
- type: map_at_5
value: 15.012
- type: mrr_at_1
value: 20.521172638436482
- type: mrr_at_10
value: 31.445194147148513
- type: mrr_at_100
value: 32.61483800330437
- type: mrr_at_1000
value: 32.664329496819654
- type: mrr_at_20
value: 32.17294028439281
- type: mrr_at_3
value: 28.013029315960875
- type: mrr_at_5
value: 29.863192182410366
- type: nauc_map_at_1000_diff1
value: 18.01650517854619
- type: nauc_map_at_1000_max
value: 35.00555751580392
- type: nauc_map_at_1000_std
value: 9.786398826312832
- type: nauc_map_at_100_diff1
value: 18.03109714032031
- type: nauc_map_at_100_max
value: 35.02293343117481
- type: nauc_map_at_100_std
value: 9.71347278319927
- type: nauc_map_at_10_diff1
value: 18.123527260203605
- type: nauc_map_at_10_max
value: 34.59737571933771
- type: nauc_map_at_10_std
value: 7.985244526477989
- type: nauc_map_at_1_diff1
value: 24.20116611812359
- type: nauc_map_at_1_max
value: 30.142503127773175
- type: nauc_map_at_1_std
value: 1.7528279249714371
- type: nauc_map_at_20_diff1
value: 17.92941292676869
- type: nauc_map_at_20_max
value: 34.90561535201822
- type: nauc_map_at_20_std
value: 8.806983271002245
- type: nauc_map_at_3_diff1
value: 19.621278199026627
- type: nauc_map_at_3_max
value: 33.04953696007031
- type: nauc_map_at_3_std
value: 4.743775272044947
- type: nauc_map_at_5_diff1
value: 17.84852616035865
- type: nauc_map_at_5_max
value: 33.918937290902676
- type: nauc_map_at_5_std
value: 6.43805539088188
- type: nauc_mrr_at_1000_diff1
value: 15.347525245361156
- type: nauc_mrr_at_1000_max
value: 30.984286548888416
- type: nauc_mrr_at_1000_std
value: 10.51729403704548
- type: nauc_mrr_at_100_diff1
value: 15.35190671644279
- type: nauc_mrr_at_100_max
value: 30.991582390051992
- type: nauc_mrr_at_100_std
value: 10.528113181960542
- type: nauc_mrr_at_10_diff1
value: 15.421448994451428
- type: nauc_mrr_at_10_max
value: 31.13167396372901
- type: nauc_mrr_at_10_std
value: 10.405474460265241
- type: nauc_mrr_at_1_diff1
value: 19.91098871041916
- type: nauc_mrr_at_1_max
value: 28.199940386873457
- type: nauc_mrr_at_1_std
value: 5.155228094170121
- type: nauc_mrr_at_20_diff1
value: 15.299643109767583
- type: nauc_mrr_at_20_max
value: 31.01811956006181
- type: nauc_mrr_at_20_std
value: 10.489072164322263
- type: nauc_mrr_at_3_diff1
value: 15.366450166527843
- type: nauc_mrr_at_3_max
value: 30.34857432681673
- type: nauc_mrr_at_3_std
value: 9.006900103817772
- type: nauc_mrr_at_5_diff1
value: 14.887486492755764
- type: nauc_mrr_at_5_max
value: 31.064197475112508
- type: nauc_mrr_at_5_std
value: 10.031368604363431
- type: nauc_ndcg_at_1000_diff1
value: 15.488355020463965
- type: nauc_ndcg_at_1000_max
value: 35.599964683193356
- type: nauc_ndcg_at_1000_std
value: 17.060985301144974
- type: nauc_ndcg_at_100_diff1
value: 15.854159478255767
- type: nauc_ndcg_at_100_max
value: 35.68620327215392
- type: nauc_ndcg_at_100_std
value: 16.291640368302122
- type: nauc_ndcg_at_10_diff1
value: 16.078556057593055
- type: nauc_ndcg_at_10_max
value: 35.16683300045305
- type: nauc_ndcg_at_10_std
value: 11.600026114771842
- type: nauc_ndcg_at_1_diff1
value: 19.91098871041916
- type: nauc_ndcg_at_1_max
value: 28.199940386873457
- type: nauc_ndcg_at_1_std
value: 5.155228094170121
- type: nauc_ndcg_at_20_diff1
value: 15.488844425483514
- type: nauc_ndcg_at_20_max
value: 35.56107040983233
- type: nauc_ndcg_at_20_std
value: 13.251910512661198
- type: nauc_ndcg_at_3_diff1
value: 16.74489883121594
- type: nauc_ndcg_at_3_max
value: 32.389819879059544
- type: nauc_ndcg_at_3_std
value: 7.493628842692248
- type: nauc_ndcg_at_5_diff1
value: 15.113032176867607
- type: nauc_ndcg_at_5_max
value: 34.3779074616743
- type: nauc_ndcg_at_5_std
value: 9.451124063087098
- type: nauc_precision_at_1000_diff1
value: -3.0336791429010397
- type: nauc_precision_at_1000_max
value: 7.186757791081503
- type: nauc_precision_at_1000_std
value: 24.207475517567993
- type: nauc_precision_at_100_diff1
value: 4.1799378860106025
- type: nauc_precision_at_100_max
value: 19.734149092069195
- type: nauc_precision_at_100_std
value: 27.14752823725515
- type: nauc_precision_at_10_diff1
value: 9.757385921354574
- type: nauc_precision_at_10_max
value: 31.63967138734393
- type: nauc_precision_at_10_std
value: 20.941862722792937
- type: nauc_precision_at_1_diff1
value: 19.91098871041916
- type: nauc_precision_at_1_max
value: 28.199940386873457
- type: nauc_precision_at_1_std
value: 5.155228094170121
- type: nauc_precision_at_20_diff1
value: 6.041242795339366
- type: nauc_precision_at_20_max
value: 28.346626059960002
- type: nauc_precision_at_20_std
value: 23.557255218471095
- type: nauc_precision_at_3_diff1
value: 12.29833478679591
- type: nauc_precision_at_3_max
value: 32.28472659370561
- type: nauc_precision_at_3_std
value: 12.302338064297853
- type: nauc_precision_at_5_diff1
value: 7.992994907910815
- type: nauc_precision_at_5_max
value: 32.957822083112525
- type: nauc_precision_at_5_std
value: 17.171509203185707
- type: nauc_recall_at_1000_diff1
value: 6.546403888329451
- type: nauc_recall_at_1000_max
value: 30.05169708532201
- type: nauc_recall_at_1000_std
value: 33.1025025789684
- type: nauc_recall_at_100_diff1
value: 10.063690002072539
- type: nauc_recall_at_100_max
value: 30.33645832268982
- type: nauc_recall_at_100_std
value: 24.88750198752349
- type: nauc_recall_at_10_diff1
value: 11.557048975359223
- type: nauc_recall_at_10_max
value: 32.570077522651765
- type: nauc_recall_at_10_std
value: 13.351992240284844
- type: nauc_recall_at_1_diff1
value: 24.20116611812359
- type: nauc_recall_at_1_max
value: 30.142503127773175
- type: nauc_recall_at_1_std
value: 1.7528279249714371
- type: nauc_recall_at_20_diff1
value: 10.023860910712143
- type: nauc_recall_at_20_max
value: 31.966797882093502
- type: nauc_recall_at_20_std
value: 16.292044481984295
- type: nauc_recall_at_3_diff1
value: 14.118820470249613
- type: nauc_recall_at_3_max
value: 32.864946121706126
- type: nauc_recall_at_3_std
value: 7.699657726962808
- type: nauc_recall_at_5_diff1
value: 10.13729414622558
- type: nauc_recall_at_5_max
value: 33.482336846118045
- type: nauc_recall_at_5_std
value: 10.497701399887017
- type: ndcg_at_1
value: 20.521
- type: ndcg_at_10
value: 23.837
- type: ndcg_at_100
value: 31.278
- type: ndcg_at_1000
value: 34.852
- type: ndcg_at_20
value: 26.653
- type: ndcg_at_3
value: 18.778
- type: ndcg_at_5
      value: 20.536
- type: precision_at_1
value: 20.521
- type: precision_at_10
      value: 7.583
- type: precision_at_100
value: 1.545
- type: precision_at_1000
      value: 0.221
- type: precision_at_20
value: 4.974
- type: precision_at_3
value: 13.941
- type: precision_at_5
      value: 10.866
- type: recall_at_1
      value: 9.273
- type: recall_at_10
value: 29.961
- type: recall_at_100
      value: 55.856
- type: recall_at_1000
value: 75.972
- type: recall_at_20
value: 38.045
- type: recall_at_3
value: 17.666
- type: recall_at_5
value: 22.539
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 36.77
- type: map_at_1
value: 7.879
- type: map_at_10
value: 17.448
- type: map_at_100
      value: 24.615
- type: map_at_1000
value: 26.32
- type: map_at_20
value: 20.115
- type: map_at_3
value: 12.607
- type: map_at_5
      value: 14.757
- type: mrr_at_1
      value: 60.75
- type: mrr_at_10
value: 70.04662698412697
- type: mrr_at_100
value: 70.48374853904342
- type: mrr_at_1000
value: 70.48974912019449
- type: mrr_at_20
value: 70.35767608079914
- type: mrr_at_3
value: 68.04166666666667
- type: mrr_at_5
value: 69.39166666666667
- type: nauc_map_at_1000_diff1
value: 27.22615670354747
- type: nauc_map_at_1000_max
value: 23.117379231999642
- type: nauc_map_at_1000_std
value: 15.590590587924877
- type: nauc_map_at_100_diff1
value: 27.637835658039805
- type: nauc_map_at_100_max
value: 19.836155567809612
- type: nauc_map_at_100_std
value: 11.900741405093758
- type: nauc_map_at_10_diff1
value: 30.728419563300395
- type: nauc_map_at_10_max
value: 6.106892959682543
- type: nauc_map_at_10_std
value: -9.11825174887402
- type: nauc_map_at_1_diff1
value: 39.34845843129211
- type: nauc_map_at_1_max
value: -2.7536297297258354
- type: nauc_map_at_1_std
value: -21.149652784081006
- type: nauc_map_at_20_diff1
value: 29.37489889563395
- type: nauc_map_at_20_max
value: 11.068671486589691
- type: nauc_map_at_20_std
value: -1.9024556480454369
- type: nauc_map_at_3_diff1
value: 35.58217022040217
- type: nauc_map_at_3_max
value: 0.2641479206106634
- type: nauc_map_at_3_std
value: -17.942087104722955
- type: nauc_map_at_5_diff1
value: 32.07485536680787
- type: nauc_map_at_5_max
value: 2.132142953478948
- type: nauc_map_at_5_std
value: -13.959336639125317
- type: nauc_mrr_at_1000_diff1
value: 43.561171278954866
- type: nauc_mrr_at_1000_max
value: 46.86561252904832
- type: nauc_mrr_at_1000_std
value: 25.189090212812044
- type: nauc_mrr_at_100_diff1
value: 43.56311857767914
- type: nauc_mrr_at_100_max
value: 46.87364039655639
- type: nauc_mrr_at_100_std
value: 25.20188703419532
- type: nauc_mrr_at_10_diff1
value: 43.554694361118905
- type: nauc_mrr_at_10_max
value: 46.728242258941464
- type: nauc_mrr_at_10_std
value: 25.25356257708155
- type: nauc_mrr_at_1_diff1
value: 46.435352817539524
- type: nauc_mrr_at_1_max
value: 46.0413071187664
- type: nauc_mrr_at_1_std
value: 20.350129155245682
- type: nauc_mrr_at_20_diff1
value: 43.544595900767
- type: nauc_mrr_at_20_max
value: 46.93717450668172
- type: nauc_mrr_at_20_std
value: 25.25597416021791
- type: nauc_mrr_at_3_diff1
value: 42.553383214077115
- type: nauc_mrr_at_3_max
value: 46.56975257676068
- type: nauc_mrr_at_3_std
value: 24.70327599709596
- type: nauc_mrr_at_5_diff1
value: 43.33215737862213
- type: nauc_mrr_at_5_max
value: 46.97620970583296
- type: nauc_mrr_at_5_std
value: 25.529521260210203
- type: nauc_ndcg_at_1000_diff1
value: 27.589730901498775
- type: nauc_ndcg_at_1000_max
value: 34.18730626989723
- type: nauc_ndcg_at_1000_std
value: 27.79208958504551
- type: nauc_ndcg_at_100_diff1
value: 28.099956032480257
- type: nauc_ndcg_at_100_max
value: 25.076317763406653
- type: nauc_ndcg_at_100_std
value: 19.3393302641812
- type: nauc_ndcg_at_10_diff1
value: 28.10040050055288
- type: nauc_ndcg_at_10_max
value: 27.463719470301168
- type: nauc_ndcg_at_10_std
value: 13.569605959220086
- type: nauc_ndcg_at_1_diff1
value: 39.92817671769714
- type: nauc_ndcg_at_1_max
value: 34.44662945106997
- type: nauc_ndcg_at_1_std
value: 13.388099467140332
- type: nauc_ndcg_at_20_diff1
value: 27.800968512396306
- type: nauc_ndcg_at_20_max
value: 23.78719275004937
- type: nauc_ndcg_at_20_std
value: 11.933811285502157
- type: nauc_ndcg_at_3_diff1
value: 30.362495467731133
- type: nauc_ndcg_at_3_max
value: 31.470527935112507
- type: nauc_ndcg_at_3_std
value: 13.5264322754454
- type: nauc_ndcg_at_5_diff1
value: 27.596193051135042
- type: nauc_ndcg_at_5_max
value: 28.879553439188545
- type: nauc_ndcg_at_5_std
value: 14.002675908790085
- type: nauc_precision_at_1000_diff1
value: -5.902001497187656
- type: nauc_precision_at_1000_max
value: 31.506103503010614
- type: nauc_precision_at_1000_std
value: 30.37757126360957
- type: nauc_precision_at_100_diff1
value: -7.078812736371486
- type: nauc_precision_at_100_max
value: 40.0935402905799
- type: nauc_precision_at_100_std
value: 48.350060964069996
- type: nauc_precision_at_10_diff1
value: 2.9397070998315495
- type: nauc_precision_at_10_max
value: 41.427281680892975
- type: nauc_precision_at_10_std
value: 41.568474216601494
- type: nauc_precision_at_1_diff1
value: 46.435352817539524
- type: nauc_precision_at_1_max
value: 46.0413071187664
- type: nauc_precision_at_1_std
value: 20.350129155245682
- type: nauc_precision_at_20_diff1
value: -0.5003867750646896
- type: nauc_precision_at_20_max
value: 43.11320479268452
- type: nauc_precision_at_20_std
value: 46.31414266215817
- type: nauc_precision_at_3_diff1
value: 16.843701906002153
- type: nauc_precision_at_3_max
value: 39.14348289333492
- type: nauc_precision_at_3_std
value: 28.97286018704868
- type: nauc_precision_at_5_diff1
value: 7.4678851421555255
- type: nauc_precision_at_5_max
value: 39.44725843015022
- type: nauc_precision_at_5_std
value: 36.07126271213125
- type: nauc_recall_at_1000_diff1
value: 12.918659968294232
- type: nauc_recall_at_1000_max
value: 18.912793350749517
- type: nauc_recall_at_1000_std
value: 34.58765147591728
- type: nauc_recall_at_100_diff1
value: 17.75168890570515
- type: nauc_recall_at_100_max
value: 9.431103175972714
- type: nauc_recall_at_100_std
value: 18.236704585602688
- type: nauc_recall_at_10_diff1
value: 22.428401923490217
- type: nauc_recall_at_10_max
value: -2.0581844217543095
- type: nauc_recall_at_10_std
value: -12.095753965206086
- type: nauc_recall_at_1_diff1
value: 39.34845843129211
- type: nauc_recall_at_1_max
value: -2.7536297297258354
- type: nauc_recall_at_1_std
value: -21.149652784081006
- type: nauc_recall_at_20_diff1
value: 19.029969489215137
- type: nauc_recall_at_20_max
value: 0.4313311185111767
- type: nauc_recall_at_20_std
value: -4.001252650460747
- type: nauc_recall_at_3_diff1
value: 32.40881022483858
- type: nauc_recall_at_3_max
value: -2.2448786906703293
- type: nauc_recall_at_3_std
value: -18.736548322855686
- type: nauc_recall_at_5_diff1
value: 25.908532046267744
- type: nauc_recall_at_5_max
value: -2.4645406246201174
- type: nauc_recall_at_5_std
value: -14.819488134588758
- type: ndcg_at_1
value: 47.25
- type: ndcg_at_10
value: 36.77
- type: ndcg_at_100
value: 42.33
- type: ndcg_at_1000
      value: 50.382
- type: ndcg_at_20
value: 36.51
- type: ndcg_at_3
value: 40.128
- type: ndcg_at_5
value: 38.031
- type: precision_at_1
      value: 60.75
- type: precision_at_10
      value: 29.55
- type: precision_at_100
value: 9.62
- type: precision_at_1000
      value: 2.058
- type: precision_at_20
value: 22.125
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 38.25
- type: recall_at_1
value: 7.879
- type: recall_at_10
value: 23.783
- type: recall_at_100
value: 51.193
- type: recall_at_1000
value: 75.995
- type: recall_at_20
value: 31.05
- type: recall_at_3
value: 14.16
- type: recall_at_5
value: 17.727
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
      value: 53.945
- type: f1
value: 46.74955162106079
- type: f1_weighted
value: 55.44564710432288
- type: main_score
      value: 53.945
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: main_score
value: 84.127
- type: map_at_1
value: 69.831
- type: map_at_10
value: 79.589
- type: map_at_100
      value: 79.775
- type: map_at_1000
value: 79.788
- type: map_at_20
value: 79.706
- type: map_at_3
      value: 78.421
- type: map_at_5
value: 79.239
- type: mrr_at_1
value: 75.05250525052504
- type: mrr_at_10
value: 84.58987565423183
- type: mrr_at_100
value: 84.65851795351881
- type: mrr_at_1000
value: 84.65972718436838
- type: mrr_at_20
value: 84.64833916172947
- type: mrr_at_3
value: 83.64336433643345
- type: mrr_at_5
value: 84.34018401840153
- type: nauc_map_at_1000_diff1
value: 49.23229610937116
- type: nauc_map_at_1000_max
value: 2.538940503744293
- type: nauc_map_at_1000_std
value: -28.281666551373885
- type: nauc_map_at_100_diff1
value: 49.206748439493715
- type: nauc_map_at_100_max
value: 2.5306616051352426
- type: nauc_map_at_100_std
value: -28.278850840258357
- type: nauc_map_at_10_diff1
value: 49.09546754806344
- type: nauc_map_at_10_max
value: 2.6113492488760803
- type: nauc_map_at_10_std
value: -28.33173942793787
- type: nauc_map_at_1_diff1
value: 54.03823845678141
- type: nauc_map_at_1_max
value: 0.7813055695400233
- type: nauc_map_at_1_std
value: -29.69082254949428
- type: nauc_map_at_20_diff1
value: 49.13309291472015
- type: nauc_map_at_20_max
value: 2.527699255933495
- type: nauc_map_at_20_std
value: -28.273378648376767
- type: nauc_map_at_3_diff1
value: 49.16418319489923
- type: nauc_map_at_3_max
value: 2.4530562838038668
- type: nauc_map_at_3_std
value: -29.749466711737117
- type: nauc_map_at_5_diff1
value: 49.105002115323174
- type: nauc_map_at_5_max
value: 2.730159330614642
- type: nauc_map_at_5_std
value: -28.624757813540224
- type: nauc_mrr_at_1000_diff1
value: 63.27335919243411
- type: nauc_mrr_at_1000_max
value: 4.374350066360141
- type: nauc_mrr_at_1000_std
value: -39.057765474275875
- type: nauc_mrr_at_100_diff1
value: 63.27201389539822
- type: nauc_mrr_at_100_max
value: 4.380072421865697
- type: nauc_mrr_at_100_std
value: -39.05368757884141
- type: nauc_mrr_at_10_diff1
value: 63.24639295001365
- type: nauc_mrr_at_10_max
value: 4.512012375528155
- type: nauc_mrr_at_10_std
value: -39.12854460658675
- type: nauc_mrr_at_1_diff1
value: 65.10605165757288
- type: nauc_mrr_at_1_max
value: 1.9283900321068632
- type: nauc_mrr_at_1_std
value: -36.73128263177301
- type: nauc_mrr_at_20_diff1
value: 63.25714175532876
- type: nauc_mrr_at_20_max
value: 4.401641881007041
- type: nauc_mrr_at_20_std
value: -39.06295724502164
- type: nauc_mrr_at_3_diff1
value: 62.74870913078454
- type: nauc_mrr_at_3_max
value: 4.451662631818057
- type: nauc_mrr_at_3_std
value: -40.362052318194905
- type: nauc_mrr_at_5_diff1
value: 63.15462728579158
- type: nauc_mrr_at_5_max
value: 4.651205798352267
- type: nauc_mrr_at_5_std
value: -39.39561481114499
- type: nauc_ndcg_at_1000_diff1
value: 50.05516269906709
- type: nauc_ndcg_at_1000_max
value: 3.402171494055581
- type: nauc_ndcg_at_1000_std
value: -28.03925061760615
- type: nauc_ndcg_at_100_diff1
value: 49.3532420182713
- type: nauc_ndcg_at_100_max
value: 3.2254197563689253
- type: nauc_ndcg_at_100_std
value: -27.790242243156303
- type: nauc_ndcg_at_10_diff1
value: 48.83916695200456
- type: nauc_ndcg_at_10_max
value: 3.526631254510631
- type: nauc_ndcg_at_10_std
value: -28.107233038143935
- type: nauc_ndcg_at_1_diff1
value: 65.10605165757288
- type: nauc_ndcg_at_1_max
value: 1.9283900321068632
- type: nauc_ndcg_at_1_std
value: -36.73128263177301
- type: nauc_ndcg_at_20_diff1
value: 48.89391205041084
- type: nauc_ndcg_at_20_max
value: 3.193109099886884
- type: nauc_ndcg_at_20_std
value: -27.746898107657486
- type: nauc_ndcg_at_3_diff1
value: 49.700478041463256
- type: nauc_ndcg_at_3_max
value: 3.5597079593645837
- type: nauc_ndcg_at_3_std
value: -31.8276627401069
- type: nauc_ndcg_at_5_diff1
value: 49.13817289744641
- type: nauc_ndcg_at_5_max
value: 3.9842988788044162
- type: nauc_ndcg_at_5_std
value: -29.128133914203897
- type: nauc_precision_at_1000_diff1
value: -5.8168043702291445
- type: nauc_precision_at_1000_max
value: 8.661081932948386
- type: nauc_precision_at_1000_std
value: 7.898154314108613
- type: nauc_precision_at_100_diff1
value: -7.622708807398312
- type: nauc_precision_at_100_max
value: 7.573802349665375
- type: nauc_precision_at_100_std
value: 7.548940358658417
- type: nauc_precision_at_10_diff1
value: 3.651203107718887
- type: nauc_precision_at_10_max
value: 12.027476444641824
- type: nauc_precision_at_10_std
value: -3.8701414226488393
- type: nauc_precision_at_1_diff1
value: 65.10605165757288
- type: nauc_precision_at_1_max
value: 1.9283900321068632
- type: nauc_precision_at_1_std
value: -36.73128263177301
- type: nauc_precision_at_20_diff1
value: -4.51338283591896
- type: nauc_precision_at_20_max
value: 8.574478979483608
- type: nauc_precision_at_20_std
value: 3.8001684359605457
- type: nauc_precision_at_3_diff1
value: 35.12229883441577
- type: nauc_precision_at_3_max
value: 11.461666197502227
- type: nauc_precision_at_3_std
value: -34.430950046529375
- type: nauc_precision_at_5_diff1
value: 19.750032706257066
- type: nauc_precision_at_5_max
value: 15.700101161283891
- type: nauc_precision_at_5_std
value: -17.01470586200846
- type: nauc_recall_at_1000_diff1
value: 5.677803043632773
- type: nauc_recall_at_1000_max
value: 6.013417206823954
- type: nauc_recall_at_1000_std
value: 28.095710500813787
- type: nauc_recall_at_100_diff1
value: 6.062697689760903
- type: nauc_recall_at_100_max
value: 2.918708091666672
- type: nauc_recall_at_100_std
value: 15.009661326828391
- type: nauc_recall_at_10_diff1
value: 15.51901323813468
- type: nauc_recall_at_10_max
value: 5.695538162226332
- type: nauc_recall_at_10_std
value: -1.6573979540762098
- type: nauc_recall_at_1_diff1
value: 54.03823845678141
- type: nauc_recall_at_1_max
value: 0.7813055695400233
- type: nauc_recall_at_1_std
value: -29.69082254949428
- type: nauc_recall_at_20_diff1
value: 9.37823741228587
- type: nauc_recall_at_20_max
value: 3.0566017916814943
- type: nauc_recall_at_20_std
value: 6.9796184911386545
- type: nauc_recall_at_3_diff1
value: 32.07387343667272
- type: nauc_recall_at_3_max
value: 4.789923667382424
- type: nauc_recall_at_3_std
value: -24.74706115680205
- type: nauc_recall_at_5_diff1
value: 24.39694752709738
- type: nauc_recall_at_5_max
value: 7.271133287879929
- type: nauc_recall_at_5_std
value: -12.628276788882612
- type: ndcg_at_1
value: 75.053
- type: ndcg_at_10
value: 84.127
- type: ndcg_at_100
value: 84.77900000000001
- type: ndcg_at_1000
value: 85.028
- type: ndcg_at_20
value: 84.465
- type: ndcg_at_3
value: 82.179
- type: ndcg_at_5
value: 83.42399999999999
- type: precision_at_1
value: 75.053
- type: precision_at_10
value: 10.189
- type: precision_at_100
value: 1.068
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_20
value: 5.188000000000001
- type: precision_at_3
value: 31.813000000000002
- type: precision_at_5
value: 19.829
- type: recall_at_1
value: 69.831
- type: recall_at_10
value: 93.119
- type: recall_at_100
value: 95.649
- type: recall_at_1000
value: 97.245
- type: recall_at_20
value: 94.313
- type: recall_at_3
value: 87.787
- type: recall_at_5
value: 90.989
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: main_score
value: 46.018
- type: map_at_1
value: 23.239
- type: map_at_10
value: 37.785000000000004
- type: map_at_100
value: 39.78
- type: map_at_1000
value: 39.947
- type: map_at_20
value: 38.873999999999995
- type: map_at_3
value: 32.686
- type: map_at_5
value: 35.725
- type: mrr_at_1
value: 45.21604938271605
- type: mrr_at_10
value: 53.81534146580441
- type: mrr_at_100
value: 54.57479873400386
- type: mrr_at_1000
value: 54.60767741375167
- type: mrr_at_20
value: 54.32374740680479
- type: mrr_at_3
value: 51.02880658436213
- type: mrr_at_5
value: 52.91152263374482
- type: nauc_map_at_1000_diff1
value: 37.39674307076189
- type: nauc_map_at_1000_max
value: 29.499416637029057
- type: nauc_map_at_1000_std
value: -3.159386284834724
- type: nauc_map_at_100_diff1
value: 37.38267938834233
- type: nauc_map_at_100_max
value: 29.450591895687317
- type: nauc_map_at_100_std
value: -3.189530866402903
- type: nauc_map_at_10_diff1
value: 37.202309092714685
- type: nauc_map_at_10_max
value: 27.98261677114554
- type: nauc_map_at_10_std
value: -4.0144873973773985
- type: nauc_map_at_1_diff1
value: 42.42289155172154
- type: nauc_map_at_1_max
value: 20.126387750613056
- type: nauc_map_at_1_std
value: -8.558059645904228
- type: nauc_map_at_20_diff1
value: 36.940935486049106
- type: nauc_map_at_20_max
value: 28.790226950120985
- type: nauc_map_at_20_std
value: -3.5487603793931752
- type: nauc_map_at_3_diff1
value: 38.447143857375835
- type: nauc_map_at_3_max
value: 23.92233021843042
- type: nauc_map_at_3_std
value: -7.139129825565484
- type: nauc_map_at_5_diff1
value: 38.516472169319144
- type: nauc_map_at_5_max
value: 26.413918646667977
- type: nauc_map_at_5_std
value: -5.636728555199194
- type: nauc_mrr_at_1000_diff1
value: 47.74750871610032
- type: nauc_mrr_at_1000_max
value: 40.19499238606483
- type: nauc_mrr_at_1000_std
value: 0.36032080608776107
- type: nauc_mrr_at_100_diff1
value: 47.73322151755956
- type: nauc_mrr_at_100_max
value: 40.20877044107413
- type: nauc_mrr_at_100_std
value: 0.3930328752369529
- type: nauc_mrr_at_10_diff1
value: 47.62649164813202
- type: nauc_mrr_at_10_max
value: 40.31590127628367
- type: nauc_mrr_at_10_std
value: 0.3376782526921225
- type: nauc_mrr_at_1_diff1
value: 50.71224023839513
- type: nauc_mrr_at_1_max
value: 38.12334760187021
- type: nauc_mrr_at_1_std
value: -3.744748522252006
- type: nauc_mrr_at_20_diff1
value: 47.65883289781366
- type: nauc_mrr_at_20_max
value: 40.19386589459899
- type: nauc_mrr_at_20_std
value: 0.3300453619949638
- type: nauc_mrr_at_3_diff1
value: 48.15037455271594
- type: nauc_mrr_at_3_max
value: 39.63517811079612
- type: nauc_mrr_at_3_std
value: -1.2604715431363336
- type: nauc_mrr_at_5_diff1
value: 47.82905935425148
- type: nauc_mrr_at_5_max
value: 40.14477449232483
- type: nauc_mrr_at_5_std
value: -0.6387351420113502
- type: nauc_ndcg_at_1000_diff1
value: 39.62042242051141
- type: nauc_ndcg_at_1000_max
value: 34.95065768372776
- type: nauc_ndcg_at_1000_std
value: 1.2093906933233651
- type: nauc_ndcg_at_100_diff1
value: 39.52715708377756
- type: nauc_ndcg_at_100_max
value: 34.8176627511724
- type: nauc_ndcg_at_100_std
value: 1.8417866916566914
- type: nauc_ndcg_at_10_diff1
value: 38.400363035149454
- type: nauc_ndcg_at_10_max
value: 31.63896107204925
- type: nauc_ndcg_at_10_std
value: -0.8705252027316186
- type: nauc_ndcg_at_1_diff1
value: 50.71224023839513
- type: nauc_ndcg_at_1_max
value: 38.12334760187021
- type: nauc_ndcg_at_1_std
value: -3.744748522252006
- type: nauc_ndcg_at_20_diff1
value: 38.12907512053514
- type: nauc_ndcg_at_20_max
value: 32.497748011049474
- type: nauc_ndcg_at_20_std
value: -0.1752936914305571
- type: nauc_ndcg_at_3_diff1
value: 39.46177721859432
- type: nauc_ndcg_at_3_max
value: 31.939511307389072
- type: nauc_ndcg_at_3_std
value: -3.0727677367802775
- type: nauc_ndcg_at_5_diff1
value: 39.58629354813809
- type: nauc_ndcg_at_5_max
value: 31.534911396228782
- type: nauc_ndcg_at_5_std
value: -2.8301665715597277
- type: nauc_precision_at_1000_diff1
value: -0.8786446062773204
- type: nauc_precision_at_1000_max
value: 29.25589660407707
- type: nauc_precision_at_1000_std
value: 17.455591524848746
- type: nauc_precision_at_100_diff1
value: 5.066275950497446
- type: nauc_precision_at_100_max
value: 35.90713282516485
- type: nauc_precision_at_100_std
value: 19.899761019511562
- type: nauc_precision_at_10_diff1
value: 14.251592016383505
- type: nauc_precision_at_10_max
value: 38.742155587347575
- type: nauc_precision_at_10_std
value: 14.243815134657725
- type: nauc_precision_at_1_diff1
value: 50.71224023839513
- type: nauc_precision_at_1_max
value: 38.12334760187021
- type: nauc_precision_at_1_std
value: -3.744748522252006
- type: nauc_precision_at_20_diff1
value: 9.33294574281467
- type: nauc_precision_at_20_max
value: 37.78712899843252
- type: nauc_precision_at_20_std
value: 15.69120289561787
- type: nauc_precision_at_3_diff1
value: 28.27816983802183
- type: nauc_precision_at_3_max
value: 36.45541405683364
- type: nauc_precision_at_3_std
value: 3.7608923567232626
- type: nauc_precision_at_5_diff1
value: 22.57043202085106
- type: nauc_precision_at_5_max
value: 39.101539898099766
- type: nauc_precision_at_5_std
value: 9.027858223250995
- type: nauc_recall_at_1000_diff1
value: 17.5612669956746
- type: nauc_recall_at_1000_max
value: 25.889529932227624
- type: nauc_recall_at_1000_std
value: 19.57316948655149
- type: nauc_recall_at_100_diff1
value: 28.46905271419406
- type: nauc_recall_at_100_max
value: 31.153388889792833
- type: nauc_recall_at_100_std
value: 17.27258409078373
- type: nauc_recall_at_10_diff1
value: 28.126929700808944
- type: nauc_recall_at_10_max
value: 23.181744909761907
- type: nauc_recall_at_10_std
value: 1.968185972587066
- type: nauc_recall_at_1_diff1
value: 42.42289155172154
- type: nauc_recall_at_1_max
value: 20.126387750613056
- type: nauc_recall_at_1_std
value: -8.558059645904228
- type: nauc_recall_at_20_diff1
value: 26.479542294303787
- type: nauc_recall_at_20_max
value: 24.732180999052623
- type: nauc_recall_at_20_std
value: 4.561070039093053
- type: nauc_recall_at_3_diff1
value: 33.630231249403565
- type: nauc_recall_at_3_max
value: 19.866536816100318
- type: nauc_recall_at_3_std
value: -6.902891630424277
- type: nauc_recall_at_5_diff1
value: 32.374300069152945
- type: nauc_recall_at_5_max
value: 21.609786350615863
- type: nauc_recall_at_5_std
value: -4.250570794176765
- type: ndcg_at_1
value: 45.216
- type: ndcg_at_10
value: 46.018
- type: ndcg_at_100
value: 52.81
- type: ndcg_at_1000
value: 55.437000000000005
- type: ndcg_at_20
value: 48.752
- type: ndcg_at_3
value: 41.143
- type: ndcg_at_5
value: 43.428
- type: precision_at_1
value: 45.216
- type: precision_at_10
value: 12.747
- type: precision_at_100
value: 1.9980000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 7.523000000000001
- type: precision_at_3
value: 26.749000000000002
- type: precision_at_5
value: 20.617
- type: recall_at_1
value: 23.239
- type: recall_at_10
value: 53.64
- type: recall_at_100
value: 78.316
- type: recall_at_1000
value: 94.132
- type: recall_at_20
value: 62.17700000000001
- type: recall_at_3
value: 37.559
- type: recall_at_5
value: 45.605000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: main_score
value: 67.836
- type: map_at_1
value: 38.292
- type: map_at_10
value: 58.48
- type: map_at_100
value: 59.382999999999996
- type: map_at_1000
value: 59.447
- type: map_at_20
value: 59.016999999999996
- type: map_at_3
value: 54.617000000000004
- type: map_at_5
value: 57.043
- type: mrr_at_1
value: 76.58338960162052
- type: mrr_at_10
value: 83.47652808591329
- type: mrr_at_100
value: 83.63380014525882
- type: mrr_at_1000
value: 83.63933777767011
- type: mrr_at_20
value: 83.57772328539731
- type: mrr_at_3
value: 82.44654512716605
- type: mrr_at_5
value: 83.17240603195998
- type: nauc_map_at_1000_diff1
value: 16.09417706349051
- type: nauc_map_at_1000_max
value: 22.82046255671306
- type: nauc_map_at_1000_std
value: -0.06797864025553367
- type: nauc_map_at_100_diff1
value: 16.05272819609321
- type: nauc_map_at_100_max
value: 22.80861981190222
- type: nauc_map_at_100_std
value: -0.05071783771856927
- type: nauc_map_at_10_diff1
value: 15.997779294340559
- type: nauc_map_at_10_max
value: 22.615988267544513
- type: nauc_map_at_10_std
value: -0.7600035230743971
- type: nauc_map_at_1_diff1
value: 69.24726718948668
- type: nauc_map_at_1_max
value: 43.958413687770644
- type: nauc_map_at_1_std
value: -12.056753426789658
- type: nauc_map_at_20_diff1
value: 15.939881445060319
- type: nauc_map_at_20_max
value: 22.692668502577643
- type: nauc_map_at_20_std
value: -0.283868450708954
- type: nauc_map_at_3_diff1
value: 18.213734472436414
- type: nauc_map_at_3_max
value: 23.0443805721617
- type: nauc_map_at_3_std
value: -3.327751624422928
- type: nauc_map_at_5_diff1
value: 16.680008500993083
- type: nauc_map_at_5_max
value: 22.517396255963348
- type: nauc_map_at_5_std
value: -1.98531389655906
- type: nauc_mrr_at_1000_diff1
value: 67.90848983786418
- type: nauc_mrr_at_1000_max
value: 46.450918836314216
- type: nauc_mrr_at_1000_std
value: -10.897096706171377
- type: nauc_mrr_at_100_diff1
value: 67.90978153374142
- type: nauc_mrr_at_100_max
value: 46.45801498811678
- type: nauc_mrr_at_100_std
value: -10.889452971557144
- type: nauc_mrr_at_10_diff1
value: 67.85232774207358
- type: nauc_mrr_at_10_max
value: 46.519322725477366
- type: nauc_mrr_at_10_std
value: -10.850819066119888
- type: nauc_mrr_at_1_diff1
value: 69.24726718948668
- type: nauc_mrr_at_1_max
value: 43.958413687770644
- type: nauc_mrr_at_1_std
value: -12.056753426789658
- type: nauc_mrr_at_20_diff1
value: 67.89964178495697
- type: nauc_mrr_at_20_max
value: 46.511653631886404
- type: nauc_mrr_at_20_std
value: -10.839214368831332
- type: nauc_mrr_at_3_diff1
value: 67.5836395057384
- type: nauc_mrr_at_3_max
value: 46.669184506889465
- type: nauc_mrr_at_3_std
value: -11.179530780325097
- type: nauc_mrr_at_5_diff1
value: 67.77665440172093
- type: nauc_mrr_at_5_max
value: 46.573672833105725
- type: nauc_mrr_at_5_std
value: -10.982788041572968
- type: nauc_ndcg_at_1000_diff1
value: 21.116945524743244
- type: nauc_ndcg_at_1000_max
value: 26.331821580979415
- type: nauc_ndcg_at_1000_std
value: 2.2115411230013993
- type: nauc_ndcg_at_100_diff1
value: 19.998679336096366
- type: nauc_ndcg_at_100_max
value: 25.965625801662146
- type: nauc_ndcg_at_100_std
value: 2.828817915487286
- type: nauc_ndcg_at_10_diff1
value: 19.806466897776797
- type: nauc_ndcg_at_10_max
value: 25.419244862350304
- type: nauc_ndcg_at_10_std
value: 0.2155926935521766
- type: nauc_ndcg_at_1_diff1
value: 69.24726718948668
- type: nauc_ndcg_at_1_max
value: 43.958413687770644
- type: nauc_ndcg_at_1_std
value: -12.056753426789658
- type: nauc_ndcg_at_20_diff1
value: 19.547932237059364
- type: nauc_ndcg_at_20_max
value: 25.539888431109336
- type: nauc_ndcg_at_20_std
value: 1.6229496555874041
- type: nauc_ndcg_at_3_diff1
value: 23.915468237770344
- type: nauc_ndcg_at_3_max
value: 26.483987322133835
- type: nauc_ndcg_at_3_std
value: -3.927672975648966
- type: nauc_ndcg_at_5_diff1
value: 21.285580255116123
- type: nauc_ndcg_at_5_max
value: 25.39329283776291
- type: nauc_ndcg_at_5_std
value: -1.9981992190798898
- type: nauc_precision_at_1000_diff1
value: -16.397996018930517
- type: nauc_precision_at_1000_max
value: 12.038228696443355
- type: nauc_precision_at_1000_std
value: 30.699566406872442
- type: nauc_precision_at_100_diff1
value: -11.55484201940981
- type: nauc_precision_at_100_max
value: 13.542075140974724
- type: nauc_precision_at_100_std
value: 24.606150356117055
- type: nauc_precision_at_10_diff1
value: -3.0258154194368907
- type: nauc_precision_at_10_max
value: 15.656448807768248
- type: nauc_precision_at_10_std
value: 8.819867674731508
- type: nauc_precision_at_1_diff1
value: 69.24726718948668
- type: nauc_precision_at_1_max
value: 43.958413687770644
- type: nauc_precision_at_1_std
value: -12.056753426789658
- type: nauc_precision_at_20_diff1
value: -6.346117648054698
- type: nauc_precision_at_20_max
value: 14.67028697593907
- type: nauc_precision_at_20_std
value: 14.430033095760397
- type: nauc_precision_at_3_diff1
value: 9.012431714387436
- type: nauc_precision_at_3_max
value: 20.29633246829934
- type: nauc_precision_at_3_std
value: -0.8697076229386467
- type: nauc_precision_at_5_diff1
value: 2.5992309960691435
- type: nauc_precision_at_5_max
value: 16.960051232392598
- type: nauc_precision_at_5_std
value: 3.0677906197565945
- type: nauc_recall_at_1000_diff1
value: -16.397996018930495
- type: nauc_recall_at_1000_max
value: 12.038228696443342
- type: nauc_recall_at_1000_std
value: 30.69956640687237
- type: nauc_recall_at_100_diff1
value: -11.55484201940982
- type: nauc_recall_at_100_max
value: 13.542075140974749
- type: nauc_recall_at_100_std
value: 24.60615035611708
- type: nauc_recall_at_10_diff1
value: -3.025815419436788
- type: nauc_recall_at_10_max
value: 15.656448807768314
- type: nauc_recall_at_10_std
value: 8.819867674731574
- type: nauc_recall_at_1_diff1
value: 69.24726718948668
- type: nauc_recall_at_1_max
value: 43.958413687770644
- type: nauc_recall_at_1_std
value: -12.056753426789658
- type: nauc_recall_at_20_diff1
value: -6.346117648054507
- type: nauc_recall_at_20_max
value: 14.670286975939165
- type: nauc_recall_at_20_std
value: 14.430033095760383
- type: nauc_recall_at_3_diff1
value: 9.012431714387384
- type: nauc_recall_at_3_max
value: 20.296332468299312
- type: nauc_recall_at_3_std
value: -0.8697076229386763
- type: nauc_recall_at_5_diff1
value: 2.599230996069216
- type: nauc_recall_at_5_max
value: 16.960051232392622
- type: nauc_recall_at_5_std
value: 3.0677906197565834
- type: ndcg_at_1
value: 76.583
- type: ndcg_at_10
value: 67.836
- type: ndcg_at_100
value: 70.884
- type: ndcg_at_1000
value: 72.085
- type: ndcg_at_20
value: 69.149
- type: ndcg_at_3
value: 62.434
- type: ndcg_at_5
value: 65.508
- type: precision_at_1
value: 76.583
- type: precision_at_10
value: 14.282
- type: precision_at_100
value: 1.6650000000000003
- type: precision_at_1000
value: 0.182
- type: precision_at_20
value: 7.564
- type: precision_at_3
value: 39.684999999999995
- type: precision_at_5
value: 26.239
- type: recall_at_1
value: 38.292
- type: recall_at_10
value: 71.411
- type: recall_at_100
value: 83.255
- type: recall_at_1000
value: 91.182
- type: recall_at_20
value: 75.645
- type: recall_at_3
value: 59.526999999999994
- type: recall_at_5
value: 65.598
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 91.6012
- type: ap
value: 88.68235390495911
- type: ap_weighted
value: 88.68235390495911
- type: f1
value: 91.59668455015077
- type: f1_weighted
value: 91.59668455015077
- type: main_score
value: 91.6012
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: main_score
value: 34.216
- type: map_at_1
value: 15.038000000000002
- type: map_at_10
value: 27.046
- type: map_at_100
value: 28.389999999999997
- type: map_at_1000
value: 28.444999999999997
- type: map_at_20
value: 27.872000000000003
- type: map_at_3
value: 22.834
- type: map_at_5
value: 25.153
- type: mrr_at_1
value: 15.4297994269341
- type: mrr_at_10
value: 27.478492973120332
- type: mrr_at_100
value: 28.777080396786463
- type: mrr_at_1000
value: 28.825658730635972
- type: mrr_at_20
value: 28.286636068476597
- type: mrr_at_3
value: 23.33333333333318
- type: mrr_at_5
value: 25.614851957975105
- type: nauc_map_at_1000_diff1
value: 27.54679600584162
- type: nauc_map_at_1000_max
value: 0.41510056128863393
- type: nauc_map_at_1000_std
value: -21.25666818469523
- type: nauc_map_at_100_diff1
value: 27.549865152926362
- type: nauc_map_at_100_max
value: 0.41049620236650397
- type: nauc_map_at_100_std
value: -21.23460305948801
- type: nauc_map_at_10_diff1
value: 27.46238928310728
- type: nauc_map_at_10_max
value: 0.3112462662068356
- type: nauc_map_at_10_std
value: -22.07687152339386
- type: nauc_map_at_1_diff1
value: 30.7476883639058
- type: nauc_map_at_1_max
value: -0.5565808781243076
- type: nauc_map_at_1_std
value: -19.834927817494012
- type: nauc_map_at_20_diff1
value: 27.545155440501322
- type: nauc_map_at_20_max
value: 0.3473346558072676
- type: nauc_map_at_20_std
value: -21.61961934965919
- type: nauc_map_at_3_diff1
value: 27.39879856077741
- type: nauc_map_at_3_max
value: 0.06402240126581103
- type: nauc_map_at_3_std
value: -21.617551469899993
- type: nauc_map_at_5_diff1
value: 27.301329953007926
- type: nauc_map_at_5_max
value: 0.06942838790190704
- type: nauc_map_at_5_std
value: -22.27190645444131
- type: nauc_mrr_at_1000_diff1
value: 27.270571100450564
- type: nauc_mrr_at_1000_max
value: 0.5200299838701339
- type: nauc_mrr_at_1000_std
value: -21.00132445753325
- type: nauc_mrr_at_100_diff1
value: 27.270120718986174
- type: nauc_mrr_at_100_max
value: 0.522377923623997
- type: nauc_mrr_at_100_std
value: -20.974058126628332
- type: nauc_mrr_at_10_diff1
value: 27.170393202051947
- type: nauc_mrr_at_10_max
value: 0.48873943205852266
- type: nauc_mrr_at_10_std
value: -21.738471675337966
- type: nauc_mrr_at_1_diff1
value: 30.283202962075705
- type: nauc_mrr_at_1_max
value: -0.5898023407161855
- type: nauc_mrr_at_1_std
value: -19.75269473049021
- type: nauc_mrr_at_20_diff1
value: 27.274300680490825
- type: nauc_mrr_at_20_max
value: 0.5104058227528672
- type: nauc_mrr_at_20_std
value: -21.30268935462482
- type: nauc_mrr_at_3_diff1
value: 27.10789072891654
- type: nauc_mrr_at_3_max
value: 0.17628020950576678
- type: nauc_mrr_at_3_std
value: -21.472874492804447
- type: nauc_mrr_at_5_diff1
value: 27.042048354996385
- type: nauc_mrr_at_5_max
value: 0.20508452891098314
- type: nauc_mrr_at_5_std
value: -22.006377363109006
- type: nauc_ndcg_at_1000_diff1
value: 27.150914472847965
- type: nauc_ndcg_at_1000_max
value: 1.5041133804769482
- type: nauc_ndcg_at_1000_std
value: -19.524926037821043
- type: nauc_ndcg_at_100_diff1
value: 27.228817990238145
- type: nauc_ndcg_at_100_max
value: 1.5569549852164712
- type: nauc_ndcg_at_100_std
value: -18.37783977195916
- type: nauc_ndcg_at_10_diff1
value: 26.974908852930785
- type: nauc_ndcg_at_10_max
value: 0.9865201816077211
- type: nauc_ndcg_at_10_std
value: -22.744315865574556
- type: nauc_ndcg_at_1_diff1
value: 30.283202962075705
- type: nauc_ndcg_at_1_max
value: -0.5898023407161855
- type: nauc_ndcg_at_1_std
value: -19.75269473049021
- type: nauc_ndcg_at_20_diff1
value: 27.256057260883644
- type: nauc_ndcg_at_20_max
value: 1.1507498856530942
- type: nauc_ndcg_at_20_std
value: -21.119059014816134
- type: nauc_ndcg_at_3_diff1
value: 26.65932420136448
- type: nauc_ndcg_at_3_max
value: 0.36047390996708306
- type: nauc_ndcg_at_3_std
value: -22.129146087673426
- type: nauc_ndcg_at_5_diff1
value: 26.553136747559307
- type: nauc_ndcg_at_5_max
value: 0.3914050774004603
- type: nauc_ndcg_at_5_std
value: -23.162245106694787
- type: nauc_precision_at_1000_diff1
value: -3.219536411196315
- type: nauc_precision_at_1000_max
value: 18.58643056260195
- type: nauc_precision_at_1000_std
value: 13.96483533268961
- type: nauc_precision_at_100_diff1
value: 15.240824308438475
- type: nauc_precision_at_100_max
value: 12.873759519468777
- type: nauc_precision_at_100_std
value: 12.669885011350335
- type: nauc_precision_at_10_diff1
value: 24.02551103443631
- type: nauc_precision_at_10_max
value: 3.3412304054256636
- type: nauc_precision_at_10_std
value: -23.53436237582242
- type: nauc_precision_at_1_diff1
value: 30.283202962075705
- type: nauc_precision_at_1_max
value: -0.5898023407161855
- type: nauc_precision_at_1_std
value: -19.75269473049021
- type: nauc_precision_at_20_diff1
value: 23.383618639354207
- type: nauc_precision_at_20_max
value: 5.1273224302435505
- type: nauc_precision_at_20_std
value: -16.069542485279715
- type: nauc_precision_at_3_diff1
value: 24.289430079622484
- type: nauc_precision_at_3_max
value: 1.0047590622521345
- type: nauc_precision_at_3_std
value: -23.3073066696005
- type: nauc_precision_at_5_diff1
value: 23.91964787477001
- type: nauc_precision_at_5_max
value: 1.503705757938403
- type: nauc_precision_at_5_std
value: -25.080465306807003
- type: nauc_recall_at_1000_diff1
value: 18.559018331553045
- type: nauc_recall_at_1000_max
value: 41.916214927217126
- type: nauc_recall_at_1000_std
value: 59.856708470758704
- type: nauc_recall_at_100_diff1
value: 26.471212604023354
- type: nauc_recall_at_100_max
value: 10.077350060389897
- type: nauc_recall_at_100_std
value: 14.153565507764215
- type: nauc_recall_at_10_diff1
value: 26.05741155724461
- type: nauc_recall_at_10_max
value: 2.6492884997120534
- type: nauc_recall_at_10_std
value: -24.546907108105746
- type: nauc_recall_at_1_diff1
value: 30.7476883639058
- type: nauc_recall_at_1_max
value: -0.5565808781243076
- type: nauc_recall_at_1_std
value: -19.834927817494012
- type: nauc_recall_at_20_diff1
value: 26.95859513457893
- type: nauc_recall_at_20_max
value: 3.521141192333191
- type: nauc_recall_at_20_std
value: -18.30474468147818
- type: nauc_recall_at_3_diff1
value: 25.01086599052385
- type: nauc_recall_at_3_max
value: 0.9901526603339225
- type: nauc_recall_at_3_std
value: -23.299664759244102
- type: nauc_recall_at_5_diff1
value: 24.792290263748747
- type: nauc_recall_at_5_max
value: 0.9968092335084938
- type: nauc_recall_at_5_std
value: -25.345195391263754
- type: ndcg_at_1
value: 15.43
- type: ndcg_at_10
value: 34.216
- type: ndcg_at_100
value: 40.815
- type: ndcg_at_1000
value: 42.202
- type: ndcg_at_20
value: 37.179
- type: ndcg_at_3
value: 25.588
- type: ndcg_at_5
value: 29.724
- type: precision_at_1
value: 15.43
- type: precision_at_10
value: 5.918
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.5700000000000003
- type: precision_at_3
value: 11.442
- type: precision_at_5
value: 8.966000000000001
- type: recall_at_1
value: 15.038000000000002
- type: recall_at_10
value: 56.627
- type: recall_at_100
value: 87.399
- type: recall_at_1000
value: 98.009
- type: recall_at_20
value: 68.176
- type: recall_at_3
value: 33.056000000000004
- type: recall_at_5
value: 42.995
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 89.54172366621066
- type: f1
value: 88.86345617269791
- type: f1_weighted
value: 89.39824737643146
- type: main_score
value: 89.54172366621066
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 62.08162334701323
- type: f1
value: 43.12730019766516
- type: f1_weighted
value: 63.781545502237925
- type: main_score
value: 62.08162334701323
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 70.35642232683254
- type: f1
value: 68.72302949991845
- type: f1_weighted
value: 69.3283349884127
- type: main_score
value: 70.35642232683254
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 75.72965702757229
- type: f1
value: 75.45057853223203
- type: f1_weighted
value: 75.51989582351723
- type: main_score
value: 75.72965702757229
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 33.84359193475579
- type: v_measure
value: 33.84359193475579
- type: v_measure_std
value: 1.206510814601397
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 32.43240060668634
- type: v_measure
value: 32.43240060668634
- type: v_measure_std
value: 1.4462915088372668
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: main_score
value: 32.17562277399934
- type: map
value: 32.17562277399934
- type: mrr
value: 33.359132186523716
- type: nAUC_map_diff1
value: 9.64301950935433
- type: nAUC_map_max
value: -21.474489295623783
- type: nAUC_map_std
value: -2.9044953039946035
- type: nAUC_mrr_diff1
value: 9.376542394215578
- type: nAUC_mrr_max
value: -15.773926504219354
- type: nAUC_mrr_std
value: -0.751930669185602
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: main_score
value: 33.816
- type: map_at_1
value: 4.893
- type: map_at_10
value: 12.154
- type: map_at_100
value: 15.486
- type: map_at_1000
value: 16.952
- type: map_at_20
value: 13.424
- type: map_at_3
value: 8.819
- type: map_at_5
value: 10.238999999999999
- type: mrr_at_1
value: 42.10526315789473
- type: mrr_at_10
value: 52.01742100348912
- type: mrr_at_100
value: 52.6554997087846
- type: mrr_at_1000
value: 52.69599552159355
- type: mrr_at_20
value: 52.51069271775405
- type: mrr_at_3
value: 49.79360165118682
- type: mrr_at_5
value: 50.86171310629517
- type: nauc_map_at_1000_diff1
value: 22.910384139189237
- type: nauc_map_at_1000_max
value: 30.904545032635593
- type: nauc_map_at_1000_std
value: 13.256381971531022
- type: nauc_map_at_100_diff1
value: 23.657922060794174
- type: nauc_map_at_100_max
value: 30.463171555444095
- type: nauc_map_at_100_std
value: 9.403207435293652
- type: nauc_map_at_10_diff1
value: 26.99577933867989
- type: nauc_map_at_10_max
value: 25.74855919514706
- type: nauc_map_at_10_std
value: -1.946481502724064
- type: nauc_map_at_1_diff1
value: 40.87773635213689
- type: nauc_map_at_1_max
value: 18.50327114064488
- type: nauc_map_at_1_std
value: -12.884353353702357
- type: nauc_map_at_20_diff1
value: 25.182212498762404
- type: nauc_map_at_20_max
value: 27.726995459601568
- type: nauc_map_at_20_std
value: 2.265717944376315
- type: nauc_map_at_3_diff1
value: 32.24473894835545
- type: nauc_map_at_3_max
value: 19.73101542872105
- type: nauc_map_at_3_std
value: -10.159375851390948
- type: nauc_map_at_5_diff1
value: 30.660429521421523
- type: nauc_map_at_5_max
value: 22.777642402610702
- type: nauc_map_at_5_std
value: -6.784458070696157
- type: nauc_mrr_at_1000_diff1
value: 35.540967575378694
- type: nauc_mrr_at_1000_max
value: 43.94574660779749
- type: nauc_mrr_at_1000_std
value: 24.857915852637742
- type: nauc_mrr_at_100_diff1
value: 35.54094740404627
- type: nauc_mrr_at_100_max
value: 43.9872938663598
- type: nauc_mrr_at_100_std
value: 24.908343520366564
- type: nauc_mrr_at_10_diff1
value: 35.499666044876456
- type: nauc_mrr_at_10_max
value: 43.372579438993235
- type: nauc_mrr_at_10_std
value: 24.55532928065396
- type: nauc_mrr_at_1_diff1
value: 38.71056728463544
- type: nauc_mrr_at_1_max
value: 39.77501110624803
- type: nauc_mrr_at_1_std
value: 18.0097891637449
- type: nauc_mrr_at_20_diff1
value: 35.4778364740954
- type: nauc_mrr_at_20_max
value: 43.861500828057984
- type: nauc_mrr_at_20_std
value: 24.844940828191785
- type: nauc_mrr_at_3_diff1
value: 36.14951749215073
- type: nauc_mrr_at_3_max
value: 43.66290737939861
- type: nauc_mrr_at_3_std
value: 23.797433124588736
- type: nauc_mrr_at_5_diff1
value: 35.43660972677152
- type: nauc_mrr_at_5_max
value: 43.45685670163132
- type: nauc_mrr_at_5_std
value: 24.304648467662023
- type: nauc_ndcg_at_1000_diff1
value: 22.759045127619025
- type: nauc_ndcg_at_1000_max
value: 44.41137470197231
- type: nauc_ndcg_at_1000_std
value: 31.38899922811944
- type: nauc_ndcg_at_100_diff1
value: 21.163726384696464
- type: nauc_ndcg_at_100_max
value: 39.3884922679833
- type: nauc_ndcg_at_100_std
value: 25.839289801954113
- type: nauc_ndcg_at_10_diff1
value: 22.897812670264933
- type: nauc_ndcg_at_10_max
value: 36.65843413176893
- type: nauc_ndcg_at_10_std
value: 24.11394501649861
- type: nauc_ndcg_at_1_diff1
value: 39.06334823564591
- type: nauc_ndcg_at_1_max
value: 39.06248799073769
- type: nauc_ndcg_at_1_std
value: 18.05518784959287
- type: nauc_ndcg_at_20_diff1
value: 21.898686330422414
- type: nauc_ndcg_at_20_max
value: 35.78404933092488
- type: nauc_ndcg_at_20_std
value: 24.304058306037895
- type: nauc_ndcg_at_3_diff1
value: 29.999089941995827
- type: nauc_ndcg_at_3_max
value: 38.55806893862189
- type: nauc_ndcg_at_3_std
value: 20.82150155152541
- type: nauc_ndcg_at_5_diff1
value: 26.920523658582933
- type: nauc_ndcg_at_5_max
value: 37.903305784392835
- type: nauc_ndcg_at_5_std
value: 22.36973654091273
- type: nauc_precision_at_1000_diff1
value: -4.736357828440193
- type: nauc_precision_at_1000_max
value: 5.778552685188162
- type: nauc_precision_at_1000_std
value: 36.06941146251687
- type: nauc_precision_at_100_diff1
value: -3.915151057855969
- type: nauc_precision_at_100_max
value: 18.188180874141302
- type: nauc_precision_at_100_std
value: 44.921932315349935
- type: nauc_precision_at_10_diff1
value: 6.335673291245972
- type: nauc_precision_at_10_max
value: 33.54781851431339
- type: nauc_precision_at_10_std
value: 36.77684118708833
- type: nauc_precision_at_1_diff1
value: 38.71056728463544
- type: nauc_precision_at_1_max
value: 39.77501110624803
- type: nauc_precision_at_1_std
value: 18.0097891637449
- type: nauc_precision_at_20_diff1
value: 2.937163642087222
- type: nauc_precision_at_20_max
value: 28.379243786948336
- type: nauc_precision_at_20_std
value: 40.35532758983976
- type: nauc_precision_at_3_diff1
value: 20.784494867231487
- type: nauc_precision_at_3_max
value: 38.495138401646045
- type: nauc_precision_at_3_std
value: 25.482915117972993
- type: nauc_precision_at_5_diff1
value: 15.127184520975657
- type: nauc_precision_at_5_max
value: 37.30602533471322
- type: nauc_precision_at_5_std
value: 29.930880073455175
- type: nauc_recall_at_1000_diff1
value: 2.3913140928424705
- type: nauc_recall_at_1000_max
value: 20.737140424377333
- type: nauc_recall_at_1000_std
value: 18.01670749520214
- type: nauc_recall_at_100_diff1
value: 7.687164842123094
- type: nauc_recall_at_100_max
value: 23.62069259941976
- type: nauc_recall_at_100_std
value: 14.411637818706472
- type: nauc_recall_at_10_diff1
value: 18.678074331558783
- type: nauc_recall_at_10_max
value: 19.514135963995347
- type: nauc_recall_at_10_std
value: -2.8989513830052713
- type: nauc_recall_at_1_diff1
value: 40.87773635213689
- type: nauc_recall_at_1_max
value: 18.50327114064488
- type: nauc_recall_at_1_std
value: -12.884353353702357
- type: nauc_recall_at_20_diff1
value: 14.926936076283534
- type: nauc_recall_at_20_max
value: 22.342969389987594
- type: nauc_recall_at_20_std
value: 2.6680867208648666
- type: nauc_recall_at_3_diff1
value: 26.592132793572855
- type: nauc_recall_at_3_max
value: 16.71686152308387
- type: nauc_recall_at_3_std
value: -10.161239210194816
- type: nauc_recall_at_5_diff1
value: 24.899494230211914
- type: nauc_recall_at_5_max
value: 19.59649962842324
- type: nauc_recall_at_5_std
value: -6.76370389227844
- type: ndcg_at_1
value: 40.867
- type: ndcg_at_10
value: 33.816
- type: ndcg_at_100
value: 31.239
- type: ndcg_at_1000
value: 39.879
- type: ndcg_at_20
value: 31.423000000000002
- type: ndcg_at_3
value: 38.911
- type: ndcg_at_5
value: 36.61
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 25.635
- type: precision_at_100
value: 8.176
- type: precision_at_1000
value: 2.092
- type: precision_at_20
value: 18.823999999999998
- type: precision_at_3
value: 37.461
- type: precision_at_5
value: 32.507999999999996
- type: recall_at_1
value: 4.893
- type: recall_at_10
value: 16.773
- type: recall_at_100
value: 32.958999999999996
- type: recall_at_1000
value: 64.094
- type: recall_at_20
value: 20.557
- type: recall_at_3
value: 10.263
- type: recall_at_5
value: 12.388
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: main_score
value: 47.705999999999996
- type: map_at_1
value: 24.09
- type: map_at_10
value: 39.287
- type: map_at_100
value: 40.567
- type: map_at_1000
value: 40.6
- type: map_at_20
value: 40.148
- type: map_at_3
value: 34.302
- type: map_at_5
value: 37.206
- type: mrr_at_1
value: 27.28852838933951
- type: mrr_at_10
value: 41.73792740348356
- type: mrr_at_100
value: 42.700956318341376
- type: mrr_at_1000
value: 42.721500078814096
- type: mrr_at_20
value: 42.39774668731353
- type: mrr_at_3
value: 37.35032831208959
- type: mrr_at_5
value: 40.00965623792975
- type: nauc_map_at_1000_diff1
value: 26.995052198015408
- type: nauc_map_at_1000_max
value: 15.20926829878716
- type: nauc_map_at_1000_std
value: -7.419434404678649
- type: nauc_map_at_100_diff1
value: 26.98675665686633
- type: nauc_map_at_100_max
value: 15.232441822080464
- type: nauc_map_at_100_std
value: -7.3860325680943655
- type: nauc_map_at_10_diff1
value: 27.2055488472847
- type: nauc_map_at_10_max
value: 15.22405773845232
- type: nauc_map_at_10_std
value: -7.997911271237045
- type: nauc_map_at_1_diff1
value: 28.974098579091123
- type: nauc_map_at_1_max
value: 11.321507460392628
- type: nauc_map_at_1_std
value: -7.640518561754067
- type: nauc_map_at_20_diff1
value: 26.975519720067403
- type: nauc_map_at_20_max
value: 15.270333199937241
- type: nauc_map_at_20_std
value: -7.593162904909118
- type: nauc_map_at_3_diff1
value: 26.196529957905334
- type: nauc_map_at_3_max
value: 13.478166583287848
- type: nauc_map_at_3_std
value: -9.053865282739968
- type: nauc_map_at_5_diff1
value: 26.79122911875148
- type: nauc_map_at_5_max
value: 14.282446217191469
- type: nauc_map_at_5_std
value: -9.094186973353946
- type: nauc_mrr_at_1000_diff1
value: 26.759927337618993
- type: nauc_mrr_at_1000_max
value: 14.825954255654228
- type: nauc_mrr_at_1000_std
value: -6.105406137980129
- type: nauc_mrr_at_100_diff1
value: 26.74960844122087
- type: nauc_mrr_at_100_max
value: 14.843683127357762
- type: nauc_mrr_at_100_std
value: -6.076356380149935
- type: nauc_mrr_at_10_diff1
value: 26.944765214641325
- type: nauc_mrr_at_10_max
value: 14.94642107131636
- type: nauc_mrr_at_10_std
value: -6.336027654512049
- type: nauc_mrr_at_1_diff1
value: 28.63557135887537
- type: nauc_mrr_at_1_max
value: 11.997480919271911
- type: nauc_mrr_at_1_std
value: -6.415779575057592
- type: nauc_mrr_at_20_diff1
value: 26.707684527732884
- type: nauc_mrr_at_20_max
value: 14.891955656316206
- type: nauc_mrr_at_20_std
value: -6.170926409650526
- type: nauc_mrr_at_3_diff1
value: 26.09833571219951
- type: nauc_mrr_at_3_max
value: 13.619335397303093
- type: nauc_mrr_at_3_std
value: -6.99260621640241
- type: nauc_mrr_at_5_diff1
value: 26.509106156499758
- type: nauc_mrr_at_5_max
value: 14.309307369143232
- type: nauc_mrr_at_5_std
value: -7.036129929142912
- type: nauc_ndcg_at_1000_diff1
value: 26.58998518885675
- type: nauc_ndcg_at_1000_max
value: 16.730704716377872
- type: nauc_ndcg_at_1000_std
value: -5.39551318704605
- type: nauc_ndcg_at_100_diff1
value: 26.367304449158542
- type: nauc_ndcg_at_100_max
value: 17.497911381186437
- type: nauc_ndcg_at_100_std
value: -4.274806854701229
- type: nauc_ndcg_at_10_diff1
value: 27.275827813350823
- type: nauc_ndcg_at_10_max
value: 17.61502848669633
- type: nauc_ndcg_at_10_std
value: -6.706786953638304
- type: nauc_ndcg_at_1_diff1
value: 28.73750705322627
- type: nauc_ndcg_at_1_max
value: 12.034842420318594
- type: nauc_ndcg_at_1_std
value: -6.331175328355812
- type: nauc_ndcg_at_20_diff1
value: 26.334025198409822
- type: nauc_ndcg_at_20_max
value: 17.855473370518965
- type: nauc_ndcg_at_20_std
value: -5.403020940844481
- type: nauc_ndcg_at_3_diff1
value: 25.45388148358677
- type: nauc_ndcg_at_3_max
value: 14.079983701064627
- type: nauc_ndcg_at_3_std
value: -8.890083252778314
- type: nauc_ndcg_at_5_diff1
value: 26.33612130048854
- type: nauc_ndcg_at_5_max
value: 15.450244767383477
- type: nauc_ndcg_at_5_std
value: -9.054428820466049
- type: nauc_precision_at_1000_diff1
value: -5.4513464358643935
- type: nauc_precision_at_1000_max
value: 5.371939619810606
- type: nauc_precision_at_1000_std
value: 14.8654667034019
- type: nauc_precision_at_100_diff1
value: -1.3987377525099691
- type: nauc_precision_at_100_max
value: 13.911794092689838
- type: nauc_precision_at_100_std
value: 21.429657983736398
- type: nauc_precision_at_10_diff1
value: 17.11455042469293
- type: nauc_precision_at_10_max
value: 22.09155979887235
- type: nauc_precision_at_10_std
value: 4.5779383691575335
- type: nauc_precision_at_1_diff1
value: 28.73750705322627
- type: nauc_precision_at_1_max
value: 12.034842420318594
- type: nauc_precision_at_1_std
value: -6.331175328355812
- type: nauc_precision_at_20_diff1
value: 8.866920301402327
- type: nauc_precision_at_20_max
value: 20.465524038064146
- type: nauc_precision_at_20_std
value: 11.77414197569535
- type: nauc_precision_at_3_diff1
value: 20.723368404844305
- type: nauc_precision_at_3_max
value: 16.257890926808553
- type: nauc_precision_at_3_std
value: -6.290754270412709
- type: nauc_precision_at_5_diff1
value: 20.209421398374488
- type: nauc_precision_at_5_max
value: 18.627423971893325
- type: nauc_precision_at_5_std
value: -4.6989054258140355
- type: nauc_recall_at_1000_diff1
value: 16.326550389848265
- type: nauc_recall_at_1000_max
value: 72.55345747292822
- type: nauc_recall_at_1000_std
value: 63.7692611505317
- type: nauc_recall_at_100_diff1
value: 16.03698346212984
- type: nauc_recall_at_100_max
value: 50.432030846802064
- type: nauc_recall_at_100_std
value: 43.37937315409283
- type: nauc_recall_at_10_diff1
value: 26.91743922623231
- type: nauc_recall_at_10_max
value: 26.28334350051652
- type: nauc_recall_at_10_std
value: -3.6769327984943248
- type: nauc_recall_at_1_diff1
value: 28.974098579091123
- type: nauc_recall_at_1_max
value: 11.321507460392628
- type: nauc_recall_at_1_std
value: -7.640518561754067
- type: nauc_recall_at_20_diff1
value: 21.32293933043855
- type: nauc_recall_at_20_max
value: 31.996089227364994
- type: nauc_recall_at_20_std
value: 5.0730478086085995
- type: nauc_recall_at_3_diff1
value: 22.708520483632753
- type: nauc_recall_at_3_max
value: 14.897940279836913
- type: nauc_recall_at_3_std
value: -10.081304729280403
- type: nauc_recall_at_5_diff1
value: 24.140285353276628
- type: nauc_recall_at_5_max
value: 17.99130898455
- type: nauc_recall_at_5_std
value: -11.006510541854203
- type: ndcg_at_1
value: 27.26
- type: ndcg_at_10
value: 47.705999999999996
- type: ndcg_at_100
value: 53.016
- type: ndcg_at_1000
value: 53.715
- type: ndcg_at_20
value: 50.498
- type: ndcg_at_3
value: 38.124
- type: ndcg_at_5
value: 43.097
- type: precision_at_1
value: 27.26
- type: precision_at_10
value: 8.447000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 4.874
- type: precision_at_3
value: 17.835
- type: precision_at_5
value: 13.517000000000001
- type: recall_at_1
value: 24.09
- type: recall_at_10
value: 71.10600000000001
- type: recall_at_100
value: 93.953
- type: recall_at_1000
value: 99.073
- type: recall_at_20
value: 81.523
- type: recall_at_3
value: 46.174
- type: recall_at_5
value: 57.677
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: main_score
value: 89.676
- type: map_at_1
value: 72.103
- type: map_at_10
value: 86.14500000000001
- type: map_at_100
value: 86.765
- type: map_at_1000
value: 86.776
- type: map_at_20
value: 86.562
- type: map_at_3
value: 83.214
- type: map_at_5
value: 85.103
- type: mrr_at_1
value: 83.05
- type: mrr_at_10
value: 88.93702380952368
- type: mrr_at_100
value: 89.01863878447548
- type: mrr_at_1000
value: 89.01885795102484
- type: mrr_at_20
value: 88.99974718680856
- type: mrr_at_3
value: 88.08333333333313
- type: mrr_at_5
value: 88.71633333333311
- type: nauc_map_at_1000_diff1
value: 78.13997479130329
- type: nauc_map_at_1000_max
value: 33.16799361159121
- type: nauc_map_at_1000_std
value: -55.1863277755837
- type: nauc_map_at_100_diff1
value: 78.14023553984367
- type: nauc_map_at_100_max
value: 33.13369714413867
- type: nauc_map_at_100_std
value: -55.23540842004624
- type: nauc_map_at_10_diff1
value: 78.37080186192892
- type: nauc_map_at_10_max
value: 32.57134371768262
- type: nauc_map_at_10_std
value: -57.373890318858635
- type: nauc_map_at_1_diff1
value: 81.43018798912361
- type: nauc_map_at_1_max
value: 25.19409927583946
- type: nauc_map_at_1_std
value: -48.22311263550707
- type: nauc_map_at_20_diff1
value: 78.2531228519997
- type: nauc_map_at_20_max
value: 32.93544556033276
- type: nauc_map_at_20_std
value: -56.1055098795547
- type: nauc_map_at_3_diff1
value: 78.87676183243428
- type: nauc_map_at_3_max
value: 30.20611964511498
- type: nauc_map_at_3_std
value: -58.43976419533779
- type: nauc_map_at_5_diff1
value: 78.74187209420451
- type: nauc_map_at_5_max
value: 31.54047365144067
- type: nauc_map_at_5_std
value: -58.97219700125237
- type: nauc_mrr_at_1000_diff1
value: 78.95748141758239
- type: nauc_mrr_at_1000_max
value: 35.915215848182335
- type: nauc_mrr_at_1000_std
value: -51.60783225234237
- type: nauc_mrr_at_100_diff1
value: 78.95727688352294
- type: nauc_mrr_at_100_max
value: 35.915856450202206
- type: nauc_mrr_at_100_std
value: -51.60782742807526
- type: nauc_mrr_at_10_diff1
value: 78.97062716064038
- type: nauc_mrr_at_10_max
value: 35.98944352252478
- type: nauc_mrr_at_10_std
value: -51.77952280125023
- type: nauc_mrr_at_1_diff1
value: 79.56130369111403
- type: nauc_mrr_at_1_max
value: 35.942655751158995
- type: nauc_mrr_at_1_std
value: -48.53333294529543
- type: nauc_mrr_at_20_diff1
value: 78.96215019750328
- type: nauc_mrr_at_20_max
value: 35.91684162704735
- type: nauc_mrr_at_20_std
value: -51.67122079763854
- type: nauc_mrr_at_3_diff1
value: 78.70330923531215
- type: nauc_mrr_at_3_max
value: 35.87542341241571
- type: nauc_mrr_at_3_std
value: -51.87635339239034
- type: nauc_mrr_at_5_diff1
value: 78.99544950827739
- type: nauc_mrr_at_5_max
value: 35.965125484837266
- type: nauc_mrr_at_5_std
value: -52.11029578138711
- type: nauc_ndcg_at_1000_diff1
value: 78.10303471223646
- type: nauc_ndcg_at_1000_max
value: 34.72596142439839
- type: nauc_ndcg_at_1000_std
value: -53.2962525848089
- type: nauc_ndcg_at_100_diff1
value: 78.06267135641467
- type: nauc_ndcg_at_100_max
value: 34.54419033520112
- type: nauc_ndcg_at_100_std
value: -53.5392586501254
- type: nauc_ndcg_at_10_diff1
value: 78.17567073559658
- type: nauc_ndcg_at_10_max
value: 33.787109792594144
- type: nauc_ndcg_at_10_std
value: -57.23628218329926
- type: nauc_ndcg_at_1_diff1
value: 79.5420688434198
- type: nauc_ndcg_at_1_max
value: 36.07066857529557
- type: nauc_ndcg_at_1_std
value: -48.48781152561791
- type: nauc_ndcg_at_20_diff1
value: 78.21739679352075
- type: nauc_ndcg_at_20_max
value: 34.04005309785922
- type: nauc_ndcg_at_20_std
value: -55.65001368252659
- type: nauc_ndcg_at_3_diff1
value: 77.47445949226606
- type: nauc_ndcg_at_3_max
value: 32.77007174469541
- type: nauc_ndcg_at_3_std
value: -56.260910342535894
- type: nauc_ndcg_at_5_diff1
value: 78.15994882398387
- type: nauc_ndcg_at_5_max
value: 33.11497252066444
- type: nauc_ndcg_at_5_std
value: -58.346472568678664
- type: nauc_precision_at_1000_diff1
value: -45.22108856190449
- type: nauc_precision_at_1000_max
value: -3.769158876252231
- type: nauc_precision_at_1000_std
value: 43.723870330086925
- type: nauc_precision_at_100_diff1
value: -45.23758967194308
- type: nauc_precision_at_100_max
value: -4.363166810337138
- type: nauc_precision_at_100_std
value: 42.94820379534783
- type: nauc_precision_at_10_diff1
value: -40.752163951230585
- type: nauc_precision_at_10_max
value: -1.6169274191392247
- type: nauc_precision_at_10_std
value: 29.249486658726266
- type: nauc_precision_at_1_diff1
value: 79.5420688434198
- type: nauc_precision_at_1_max
value: 36.07066857529557
- type: nauc_precision_at_1_std
value: -48.48781152561791
- type: nauc_precision_at_20_diff1
value: -43.52965345142954
- type: nauc_precision_at_20_max
value: -3.410765512192599
- type: nauc_precision_at_20_std
value: 36.265002036696245
- type: nauc_precision_at_3_diff1
value: -21.947123522182608
- type: nauc_precision_at_3_max
value: 6.055908914766165
- type: nauc_precision_at_3_std
value: 6.408586281581511
- type: nauc_precision_at_5_diff1
value: -34.130820418059265
- type: nauc_precision_at_5_max
value: 1.1109424247006825
- type: nauc_precision_at_5_std
value: 18.488513018473114
- type: nauc_recall_at_1000_diff1
value: 47.996662934260556
- type: nauc_recall_at_1000_max
value: 11.574413075464337
- type: nauc_recall_at_1000_std
value: -39.2955614699843
- type: nauc_recall_at_100_diff1
value: 64.12162282642701
- type: nauc_recall_at_100_max
value: 17.595341249984035
- type: nauc_recall_at_100_std
value: -74.41045136381057
- type: nauc_recall_at_10_diff1
value: 75.16961616005102
- type: nauc_recall_at_10_max
value: 28.68309207235788
- type: nauc_recall_at_10_std
value: -82.81198733010936
- type: nauc_recall_at_1_diff1
value: 81.43018798912361
- type: nauc_recall_at_1_max
value: 25.19409927583946
- type: nauc_recall_at_1_std
value: -48.22311263550707
- type: nauc_recall_at_20_diff1
value: 75.94655772120838
- type: nauc_recall_at_20_max
value: 26.033082267707137
- type: nauc_recall_at_20_std
value: -87.8035845729173
- type: nauc_recall_at_3_diff1
value: 75.18135051463966
- type: nauc_recall_at_3_max
value: 25.829788998048713
- type: nauc_recall_at_3_std
value: -66.40001628991527
- type: nauc_recall_at_5_diff1
value: 75.32388475941752
- type: nauc_recall_at_5_max
value: 26.600470217631152
- type: nauc_recall_at_5_std
value: -76.75029218302441
- type: ndcg_at_1
value: 83.06
- type: ndcg_at_10
value: 89.676
- type: ndcg_at_100
value: 90.745
- type: ndcg_at_1000
value: 90.802
- type: ndcg_at_20
value: 90.293
- type: ndcg_at_3
value: 87.01299999999999
- type: ndcg_at_5
value: 88.578
- type: precision_at_1
value: 83.06
- type: precision_at_10
value: 13.599
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.2139999999999995
- type: precision_at_3
value: 38.067
- type: precision_at_5
value: 25.06
- type: recall_at_1
value: 72.103
- type: recall_at_10
value: 96.269
- type: recall_at_100
value: 99.776
- type: recall_at_1000
value: 99.995
- type: recall_at_20
value: 98.20400000000001
- type: recall_at_3
value: 88.59700000000001
- type: recall_at_5
value: 93.015
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 57.6315484268519
- type: v_measure
value: 57.6315484268519
- type: v_measure_std
value: 4.96160605448604
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 65.10459556169661
- type: v_measure
value: 65.10459556169661
- type: v_measure_std
value: 12.297830143436506
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: main_score
value: 20.241
- type: map_at_1
value: 4.585
- type: map_at_10
value: 12.179
- type: map_at_100
value: 14.185
- type: map_at_1000
value: 14.485999999999999
- type: map_at_20
value: 13.211
- type: map_at_3
value: 8.671
- type: map_at_5
value: 10.312000000000001
- type: mrr_at_1
value: 22.7
- type: mrr_at_10
value: 33.75805555555551
- type: mrr_at_100
value: 34.817297940294345
- type: mrr_at_1000
value: 34.883397077676406
- type: mrr_at_20
value: 34.38700212283411
- type: mrr_at_3
value: 30.483333333333306
- type: mrr_at_5
value: 32.408333333333275
- type: nauc_map_at_1000_diff1
value: 14.799522136525983
- type: nauc_map_at_1000_max
value: 34.787460217244785
- type: nauc_map_at_1000_std
value: 18.09344563882231
- type: nauc_map_at_100_diff1
value: 14.768945434423111
- type: nauc_map_at_100_max
value: 34.7296008481421
- type: nauc_map_at_100_std
value: 17.862302470008842
- type: nauc_map_at_10_diff1
value: 14.144901666255635
- type: nauc_map_at_10_max
value: 32.717524928702204
- type: nauc_map_at_10_std
value: 14.61297873647561
- type: nauc_map_at_1_diff1
value: 24.110400950369463
- type: nauc_map_at_1_max
value: 28.717709149236846
- type: nauc_map_at_1_std
value: 9.47019097868293
- type: nauc_map_at_20_diff1
value: 14.60910237598006
- type: nauc_map_at_20_max
value: 34.41168874995483
- type: nauc_map_at_20_std
value: 16.8281730049661
- type: nauc_map_at_3_diff1
value: 16.927638840219913
- type: nauc_map_at_3_max
value: 30.943529346638215
- type: nauc_map_at_3_std
value: 8.770011702871889
- type: nauc_map_at_5_diff1
value: 15.149404949142397
- type: nauc_map_at_5_max
value: 32.21505246043176
- type: nauc_map_at_5_std
value: 11.327982631457365
- type: nauc_mrr_at_1000_diff1
value: 20.74353214383309
- type: nauc_mrr_at_1000_max
value: 32.03632971500104
- type: nauc_mrr_at_1000_std
value: 13.888511855973434
- type: nauc_mrr_at_100_diff1
value: 20.729669159574993
- type: nauc_mrr_at_100_max
value: 32.04616144275277
- type: nauc_mrr_at_100_std
value: 13.909503435758552
- type: nauc_mrr_at_10_diff1
value: 20.68902799696533
- type: nauc_mrr_at_10_max
value: 32.06338386152125
- type: nauc_mrr_at_10_std
value: 13.774587429590262
- type: nauc_mrr_at_1_diff1
value: 23.923563127598772
- type: nauc_mrr_at_1_max
value: 28.66045286040102
- type: nauc_mrr_at_1_std
value: 9.324543818990804
- type: nauc_mrr_at_20_diff1
value: 20.75062648249425
- type: nauc_mrr_at_20_max
value: 32.07720087059192
- type: nauc_mrr_at_20_std
value: 13.99626011275507
- type: nauc_mrr_at_3_diff1
value: 21.28016610687942
- type: nauc_mrr_at_3_max
value: 31.378222612242958
- type: nauc_mrr_at_3_std
value: 11.873532774618438
- type: nauc_mrr_at_5_diff1
value: 20.553867571063165
- type: nauc_mrr_at_5_max
value: 32.0086355849153
- type: nauc_mrr_at_5_std
value: 13.390002782582572
- type: nauc_ndcg_at_1000_diff1
value: 16.18725835208729
- type: nauc_ndcg_at_1000_max
value: 36.31956949239469
- type: nauc_ndcg_at_1000_std
value: 24.60962249502986
- type: nauc_ndcg_at_100_diff1
value: 16.080952256468766
- type: nauc_ndcg_at_100_max
value: 36.836773125169934
- type: nauc_ndcg_at_100_std
value: 23.486496647173155
- type: nauc_ndcg_at_10_diff1
value: 14.992050388748346
- type: nauc_ndcg_at_10_max
value: 33.69147398978967
- type: nauc_ndcg_at_10_std
value: 17.50282505569243
- type: nauc_ndcg_at_1_diff1
value: 23.923563127598772
- type: nauc_ndcg_at_1_max
value: 28.66045286040102
- type: nauc_ndcg_at_1_std
value: 9.324543818990804
- type: nauc_ndcg_at_20_diff1
value: 15.823547784233455
- type: nauc_ndcg_at_20_max
value: 36.18197091556912
- type: nauc_ndcg_at_20_std
value: 20.836130350813587
- type: nauc_ndcg_at_3_diff1
value: 17.463404815086445
- type: nauc_ndcg_at_3_max
value: 31.775390145640543
- type: nauc_ndcg_at_3_std
value: 10.613295919918224
- type: nauc_ndcg_at_5_diff1
value: 15.58999290484695
- type: nauc_ndcg_at_5_max
value: 32.98927404083336
- type: nauc_ndcg_at_5_std
value: 13.95090164575397
- type: nauc_precision_at_1000_diff1
value: 8.606689567686072
- type: nauc_precision_at_1000_max
value: 25.80568112038825
- type: nauc_precision_at_1000_std
value: 33.49354016345421
- type: nauc_precision_at_100_diff1
value: 11.096364034281708
- type: nauc_precision_at_100_max
value: 33.095554194808315
- type: nauc_precision_at_100_std
value: 30.31514346435903
- type: nauc_precision_at_10_diff1
value: 10.362661293325996
- type: nauc_precision_at_10_max
value: 32.23480074406134
- type: nauc_precision_at_10_std
value: 21.320659854598354
- type: nauc_precision_at_1_diff1
value: 23.923563127598772
- type: nauc_precision_at_1_max
value: 28.66045286040102
- type: nauc_precision_at_1_std
value: 9.324543818990804
- type: nauc_precision_at_20_diff1
value: 11.731217258112276
- type: nauc_precision_at_20_max
value: 35.49265680709476
- type: nauc_precision_at_20_std
value: 26.68721816769851
- type: nauc_precision_at_3_diff1
value: 14.622634083058628
- type: nauc_precision_at_3_max
value: 32.8256707695311
- type: nauc_precision_at_3_std
value: 11.441812061728767
- type: nauc_precision_at_5_diff1
value: 11.382590357991592
- type: nauc_precision_at_5_max
value: 33.40649468969605
- type: nauc_precision_at_5_std
value: 16.422568951127378
- type: nauc_recall_at_1000_diff1
value: 8.277183806243393
- type: nauc_recall_at_1000_max
value: 25.520354250846594
- type: nauc_recall_at_1000_std
value: 34.48676735616856
- type: nauc_recall_at_100_diff1
value: 10.8973527517937
- type: nauc_recall_at_100_max
value: 32.78606622733229
- type: nauc_recall_at_100_std
value: 30.54756167683916
- type: nauc_recall_at_10_diff1
value: 10.241195369539595
- type: nauc_recall_at_10_max
value: 31.93427995053164
- type: nauc_recall_at_10_std
value: 21.22066565209421
- type: nauc_recall_at_1_diff1
value: 24.110400950369463
- type: nauc_recall_at_1_max
value: 28.717709149236846
- type: nauc_recall_at_1_std
value: 9.47019097868293
- type: nauc_recall_at_20_diff1
value: 11.486528161594357
- type: nauc_recall_at_20_max
value: 35.08150781519915
- type: nauc_recall_at_20_std
value: 26.533619286721965
- type: nauc_recall_at_3_diff1
value: 14.409769092274422
- type: nauc_recall_at_3_max
value: 32.60821765433334
- type: nauc_recall_at_3_std
value: 11.348744265520075
- type: nauc_recall_at_5_diff1
value: 11.156286383427009
- type: nauc_recall_at_5_max
value: 33.060053009570325
- type: nauc_recall_at_5_std
value: 16.305557433000203
- type: ndcg_at_1
value: 22.7
- type: ndcg_at_10
value: 20.241
- type: ndcg_at_100
value: 28.005000000000003
- type: ndcg_at_1000
value: 33.337
- type: ndcg_at_20
value: 23.035
- type: ndcg_at_3
value: 19.225
- type: ndcg_at_5
value: 16.73
- type: precision_at_1
value: 22.7
- type: precision_at_10
value: 10.58
- type: precision_at_100
value: 2.176
- type: precision_at_1000
value: 0.345
- type: precision_at_20
value: 6.9
- type: precision_at_3
value: 18.2
- type: precision_at_5
value: 14.799999999999999
- type: recall_at_1
value: 4.585
- type: recall_at_10
value: 21.462
- type: recall_at_100
value: 44.196999999999996
- type: recall_at_1000
value: 70.1
- type: recall_at_20
value: 28.006999999999998
- type: recall_at_3
value: 11.078000000000001
- type: recall_at_5
value: 15.018
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 84.36926725932263
- type: cosine_spearman
value: 79.92986896006748
- type: euclidean_pearson
value: 81.60738350267255
- type: euclidean_spearman
value: 79.92986857077926
- type: main_score
value: 79.92986896006748
- type: manhattan_pearson
value: 81.5923069536872
- type: manhattan_spearman
value: 79.73172626220187
- type: pearson
value: 84.36926725932263
- type: spearman
value: 79.92986896006748
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 85.34145297379273
- type: cosine_spearman
value: 76.66847347731301
- type: euclidean_pearson
value: 81.43408805079034
- type: euclidean_spearman
value: 76.6680945379484
- type: main_score
value: 76.66847347731301
- type: manhattan_pearson
value: 81.69812210080966
- type: manhattan_spearman
value: 77.00962684551284
- type: pearson
value: 85.34145297379273
- type: spearman
value: 76.66847347731301
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 84.5234167909779
- type: cosine_spearman
value: 84.86841413445535
- type: euclidean_pearson
value: 84.17741655183796
- type: euclidean_spearman
value: 84.86841405901674
- type: main_score
value: 84.86841413445535
- type: manhattan_pearson
value: 84.15491829147086
- type: manhattan_spearman
value: 84.93066841323679
- type: pearson
value: 84.5234167909779
- type: spearman
value: 84.86841413445535
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 83.42559938022957
- type: cosine_spearman
value: 80.10636060670153
- type: euclidean_pearson
value: 82.31695543050009
- type: euclidean_spearman
value: 80.10637586616073
- type: main_score
value: 80.10636060670153
- type: manhattan_pearson
value: 82.15731596876633
- type: manhattan_spearman
value: 80.02499151302123
- type: pearson
value: 83.42559938022957
- type: spearman
value: 80.10636060670153
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 87.98708135949613
- type: cosine_spearman
value: 88.69670049389599
- type: euclidean_pearson
value: 87.73091071499016
- type: euclidean_spearman
value: 88.69669966606001
- type: main_score
value: 88.69670049389599
- type: manhattan_pearson
value: 87.52276751048582
- type: manhattan_spearman
value: 88.5214230554986
- type: pearson
value: 87.98708135949613
- type: spearman
value: 88.69670049389599
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 83.38330950325803
- type: cosine_spearman
value: 84.62194600310691
- type: euclidean_pearson
value: 83.4921014845454
- type: euclidean_spearman
value: 84.62194539439683
- type: main_score
value: 84.62194600310691
- type: manhattan_pearson
value: 83.27754689500482
- type: manhattan_spearman
value: 84.37797144965002
- type: pearson
value: 83.38330950325803
- type: spearman
value: 84.62194600310691
task:
type: STS
- dataset:
config: en-ar
name: MTEB STS17 (en-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 64.3970938916265
- type: cosine_spearman
value: 64.20857293171593
- type: euclidean_pearson
value: 64.70484646950464
- type: euclidean_spearman
value: 64.20857293171593
- type: main_score
value: 64.20857293171593
- type: manhattan_pearson
value: 63.61585574374933
- type: manhattan_spearman
value: 62.52898030084564
- type: pearson
value: 64.3970938916265
- type: spearman
value: 64.20857293171593
task:
type: STS
- dataset:
config: en-de
name: MTEB STS17 (en-de)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 78.3035787778662
- type: cosine_spearman
value: 78.85326338385796
- type: euclidean_pearson
value: 78.59090666313418
- type: euclidean_spearman
value: 78.85326338385796
- type: main_score
value: 78.85326338385796
- type: manhattan_pearson
value: 78.4961035895383
- type: manhattan_spearman
value: 78.42104373908565
- type: pearson
value: 78.3035787778662
- type: spearman
value: 78.85326338385796
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 88.20922919233338
- type: cosine_spearman
value: 87.94347302365394
- type: euclidean_pearson
value: 87.98965741145625
- type: euclidean_spearman
value: 87.94347302365394
- type: main_score
value: 87.94347302365394
- type: manhattan_pearson
value: 87.94636580768939
- type: manhattan_spearman
value: 87.82077364455115
- type: pearson
value: 88.20922919233338
- type: spearman
value: 87.94347302365394
task:
type: STS
- dataset:
config: en-tr
name: MTEB STS17 (en-tr)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 58.50589296592958
- type: cosine_spearman
value: 57.045627811103
- type: euclidean_pearson
value: 58.54066429107441
- type: euclidean_spearman
value: 57.045627811103
- type: main_score
value: 57.045627811103
- type: manhattan_pearson
value: 57.77923152721202
- type: manhattan_spearman
value: 55.832507020505886
- type: pearson
value: 58.50589296592958
- type: spearman
value: 57.045627811103
task:
type: STS
- dataset:
config: es-en
name: MTEB STS17 (es-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 79.01593420315352
- type: cosine_spearman
value: 79.86309144376173
- type: euclidean_pearson
value: 78.85136309334905
- type: euclidean_spearman
value: 79.86309144376173
- type: main_score
value: 79.86309144376173
- type: manhattan_pearson
value: 78.87419337945624
- type: manhattan_spearman
value: 80.0980944874198
- type: pearson
value: 79.01593420315352
- type: spearman
value: 79.86309144376173
task:
type: STS
- dataset:
config: fr-en
name: MTEB STS17 (fr-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 79.67432399995894
- type: cosine_spearman
value: 79.12303288340163
- type: euclidean_pearson
value: 79.721668775324
- type: euclidean_spearman
value: 79.12303288340163
- type: main_score
value: 79.12303288340163
- type: manhattan_pearson
value: 79.33800466555394
- type: manhattan_spearman
value: 78.30603645374914
- type: pearson
value: 79.67432399995894
- type: spearman
value: 79.12303288340163
task:
type: STS
- dataset:
config: it-en
name: MTEB STS17 (it-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 78.92024449526863
- type: cosine_spearman
value: 79.06471992660374
- type: euclidean_pearson
value: 78.85388657114522
- type: euclidean_spearman
value: 79.06471992660374
- type: main_score
value: 79.06471992660374
- type: manhattan_pearson
value: 78.56658857806735
- type: manhattan_spearman
value: 78.5908742980949
- type: pearson
value: 78.92024449526863
- type: spearman
value: 79.06471992660374
task:
type: STS
- dataset:
config: nl-en
name: MTEB STS17 (nl-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 76.64708509569135
- type: cosine_spearman
value: 75.76775070804274
- type: euclidean_pearson
value: 76.69358579979829
- type: euclidean_spearman
value: 75.76775070804274
- type: main_score
value: 75.76775070804274
- type: manhattan_pearson
value: 76.28750520391006
- type: manhattan_spearman
value: 75.30493726054976
- type: pearson
value: 76.64708509569135
- type: spearman
value: 75.76775070804274
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 69.07403446182418
- type: cosine_spearman
value: 68.99668192503603
- type: euclidean_pearson
value: 70.82685591260719
- type: euclidean_spearman
value: 68.99668192503603
- type: main_score
value: 68.99668192503603
- type: manhattan_pearson
value: 70.94201332797343
- type: manhattan_spearman
value: 68.98821773218067
- type: pearson
value: 69.07403446182418
- type: spearman
value: 68.99668192503603
task:
type: STS
- dataset:
config: de-en
name: MTEB STS22 (de-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 65.95032307094047
- type: cosine_spearman
value: 63.15571038787516
- type: euclidean_pearson
value: 68.31815956207403
- type: euclidean_spearman
value: 63.15571038787516
- type: main_score
value: 63.15571038787516
- type: manhattan_pearson
value: 69.57471678363024
- type: manhattan_spearman
value: 63.78770917466211
- type: pearson
value: 65.95032307094047
- type: spearman
value: 63.15571038787516
task:
type: STS
- dataset:
config: es-en
name: MTEB STS22 (es-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 76.57985370197574
- type: cosine_spearman
value: 78.61171041249278
- type: euclidean_pearson
value: 77.64916374513423
- type: euclidean_spearman
value: 78.61182871621082
- type: main_score
value: 78.61171041249278
- type: manhattan_pearson
value: 79.45516154600577
- type: manhattan_spearman
value: 79.81770224017768
- type: pearson
value: 76.57985370197574
- type: spearman
value: 78.61171041249278
task:
type: STS
- dataset:
config: pl-en
name: MTEB STS22 (pl-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 78.66979183071325
- type: cosine_spearman
value: 76.74899167835852
- type: euclidean_pearson
value: 78.89780095637012
- type: euclidean_spearman
value: 76.74899167835852
- type: main_score
value: 76.74899167835852
- type: manhattan_pearson
value: 79.18536398264527
- type: manhattan_spearman
value: 77.8533686712189
- type: pearson
value: 78.66979183071325
- type: spearman
value: 76.74899167835852
task:
type: STS
- dataset:
config: zh-en
name: MTEB STS22 (zh-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 75.65018415517595
- type: cosine_spearman
value: 74.96983110528109
- type: euclidean_pearson
value: 77.0199252096022
- type: euclidean_spearman
value: 75.05313744822759
- type: main_score
value: 74.96983110528109
- type: manhattan_pearson
value: 77.28747618528581
- type: manhattan_spearman
value: 74.95188542213391
- type: pearson
value: 75.65018415517595
- type: spearman
value: 74.96983110528109
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 84.89952732150343
- type: cosine_spearman
value: 86.06896054399277
- type: euclidean_pearson
value: 85.69195853460913
- type: euclidean_spearman
value: 86.06896054399277
- type: main_score
value: 86.06896054399277
- type: manhattan_pearson
value: 85.56550688049849
- type: manhattan_spearman
value: 85.96422284827248
- type: pearson
value: 84.89952732150343
- type: spearman
value: 86.06896054399277
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: main_score
value: 81.89447973144247
- type: map
value: 81.89447973144247
- type: mrr
value: 95.02511830943203
- type: nAUC_map_diff1
value: 3.3432260393863147
- type: nAUC_map_max
value: 54.252667154593915
- type: nAUC_map_std
value: 68.86046114121041
- type: nAUC_mrr_diff1
value: 48.53496653582678
- type: nAUC_mrr_max
value: 85.71793394587537
- type: nAUC_mrr_std
value: 80.13736591117815
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: main_score
value: 73.055
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 68.73700000000001
- type: map_at_100
value: 69.248
- type: map_at_1000
value: 69.271
- type: map_at_20
value: 69.059
- type: map_at_3
value: 66.235
- type: map_at_5
value: 67.843
- type: mrr_at_1
value: 60.66666666666667
- type: mrr_at_10
value: 69.7063492063492
- type: mrr_at_100
value: 70.13874332314896
- type: mrr_at_1000
value: 70.16105806682286
- type: mrr_at_20
value: 69.97925265738732
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.16666666666667
- type: nauc_map_at_1000_diff1
value: 70.43790903123511
- type: nauc_map_at_1000_max
value: 54.58438799194478
- type: nauc_map_at_1000_std
value: -2.3233833924225875
- type: nauc_map_at_100_diff1
value: 70.43647328927425
- type: nauc_map_at_100_max
value: 54.60393233697298
- type: nauc_map_at_100_std
value: -2.296496281894915
- type: nauc_map_at_10_diff1
value: 70.36871958614046
- type: nauc_map_at_10_max
value: 54.67011099551128
- type: nauc_map_at_10_std
value: -2.7009625352656426
- type: nauc_map_at_1_diff1
value: 74.99352374397856
- type: nauc_map_at_1_max
value: 50.00344836993502
- type: nauc_map_at_1_std
value: -8.698012201837718
- type: nauc_map_at_20_diff1
value: 70.28211747093155
- type: nauc_map_at_20_max
value: 54.553120080500996
- type: nauc_map_at_20_std
value: -2.5857819931480246
- type: nauc_map_at_3_diff1
value: 71.42267536616798
- type: nauc_map_at_3_max
value: 54.14853872152404
- type: nauc_map_at_3_std
value: -3.3672073293896654
- type: nauc_map_at_5_diff1
value: 70.5522364898511
- type: nauc_map_at_5_max
value: 53.82183956625946
- type: nauc_map_at_5_std
value: -3.8112884869905086
- type: nauc_mrr_at_1000_diff1
value: 70.31304494231345
- type: nauc_mrr_at_1000_max
value: 55.634864405262206
- type: nauc_mrr_at_1000_std
value: -0.9073602724006471
- type: nauc_mrr_at_100_diff1
value: 70.31169722312256
- type: nauc_mrr_at_100_max
value: 55.653794547616464
- type: nauc_mrr_at_100_std
value: -0.8812919296154862
- type: nauc_mrr_at_10_diff1
value: 70.20728957800745
- type: nauc_mrr_at_10_max
value: 55.82409315449895
- type: nauc_mrr_at_10_std
value: -1.075930464035488
- type: nauc_mrr_at_1_diff1
value: 74.42858144028513
- type: nauc_mrr_at_1_max
value: 54.28150936595816
- type: nauc_mrr_at_1_std
value: -2.2125887288127233
- type: nauc_mrr_at_20_diff1
value: 70.12751951178618
- type: nauc_mrr_at_20_max
value: 55.646395586345186
- type: nauc_mrr_at_20_std
value: -1.0679937201638918
- type: nauc_mrr_at_3_diff1
value: 70.83694438588687
- type: nauc_mrr_at_3_max
value: 56.13927732102838
- type: nauc_mrr_at_3_std
value: -0.7791089874218045
- type: nauc_mrr_at_5_diff1
value: 70.10204767208957
- type: nauc_mrr_at_5_max
value: 55.42591427914719
- type: nauc_mrr_at_5_std
value: -1.4764758924309185
- type: nauc_ndcg_at_1000_diff1
value: 69.51940238503862
- type: nauc_ndcg_at_1000_max
value: 55.49401934363413
- type: nauc_ndcg_at_1000_std
value: -0.6435033619960048
- type: nauc_ndcg_at_100_diff1
value: 69.42773837942757
- type: nauc_ndcg_at_100_max
value: 56.08697787789855
- type: nauc_ndcg_at_100_std
value: 0.34308668749330745
- type: nauc_ndcg_at_10_diff1
value: 68.78081835695725
- type: nauc_ndcg_at_10_max
value: 56.23279741387973
- type: nauc_ndcg_at_10_std
value: -1.6400901664189715
- type: nauc_ndcg_at_1_diff1
value: 74.42858144028513
- type: nauc_ndcg_at_1_max
value: 54.28150936595816
- type: nauc_ndcg_at_1_std
value: -2.2125887288127233
- type: nauc_ndcg_at_20_diff1
value: 68.4553683006882
- type: nauc_ndcg_at_20_max
value: 55.74277759291753
- type: nauc_ndcg_at_20_std
value: -1.3736010194196164
- type: nauc_ndcg_at_3_diff1
value: 70.04684155763836
- type: nauc_ndcg_at_3_max
value: 56.23593815133674
- type: nauc_ndcg_at_3_std
value: -1.2617917976885795
- type: nauc_ndcg_at_5_diff1
value: 68.88128875602627
- type: nauc_ndcg_at_5_max
value: 54.62301571910928
- type: nauc_ndcg_at_5_std
value: -3.5841002369184762
- type: nauc_precision_at_1000_diff1
value: -27.57874055213611
- type: nauc_precision_at_1000_max
value: 10.69254261980662
- type: nauc_precision_at_1000_std
value: 41.58262996451408
- type: nauc_precision_at_100_diff1
value: -12.950536107683561
- type: nauc_precision_at_100_max
value: 21.16371708839723
- type: nauc_precision_at_100_std
value: 40.951527751953684
- type: nauc_precision_at_10_diff1
value: 8.091679678786514
- type: nauc_precision_at_10_max
value: 33.20925347609484
- type: nauc_precision_at_10_std
value: 25.770968101717557
- type: nauc_precision_at_1_diff1
value: 74.42858144028513
- type: nauc_precision_at_1_max
value: 54.28150936595816
- type: nauc_precision_at_1_std
value: -2.2125887288127233
- type: nauc_precision_at_20_diff1
value: -1.0200005991193168
- type: nauc_precision_at_20_max
value: 27.432174703186323
- type: nauc_precision_at_20_std
value: 29.095729277961407
- type: nauc_precision_at_3_diff1
value: 38.35291080418228
- type: nauc_precision_at_3_max
value: 49.66103007615846
- type: nauc_precision_at_3_std
value: 20.088808571059758
- type: nauc_precision_at_5_diff1
value: 21.518579003608927
- type: nauc_precision_at_5_max
value: 38.7296114841025
- type: nauc_precision_at_5_std
value: 19.47619911691762
- type: nauc_recall_at_1000_diff1
value: 42.25023342670368
- type: nauc_recall_at_1000_max
value: 21.825396825396062
- type: nauc_recall_at_1000_std
value: 33.84687208216713
- type: nauc_recall_at_100_diff1
value: 62.536570183629024
- type: nauc_recall_at_100_max
value: 70.01867413632091
- type: nauc_recall_at_100_std
value: 37.06504824151885
- type: nauc_recall_at_10_diff1
value: 61.1644854039766
- type: nauc_recall_at_10_max
value: 61.074517296862396
- type: nauc_recall_at_10_std
value: -0.5423227215261704
- type: nauc_recall_at_1_diff1
value: 74.99352374397856
- type: nauc_recall_at_1_max
value: 50.00344836993502
- type: nauc_recall_at_1_std
value: -8.698012201837718
- type: nauc_recall_at_20_diff1
value: 56.37978951869162
- type: nauc_recall_at_20_max
value: 58.84099235231809
- type: nauc_recall_at_20_std
value: 1.2224630005733186
- type: nauc_recall_at_3_diff1
value: 66.74850639308315
- type: nauc_recall_at_3_max
value: 58.157377341361084
- type: nauc_recall_at_3_std
value: -1.8661963986343983
- type: nauc_recall_at_5_diff1
value: 61.806012486501395
- type: nauc_recall_at_5_max
value: 54.41470702166602
- type: nauc_recall_at_5_std
value: -7.114468350278654
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 73.055
- type: ndcg_at_100
value: 75.312
- type: ndcg_at_1000
value: 75.874
- type: ndcg_at_20
value: 74.166
- type: ndcg_at_3
value: 69.211
- type: ndcg_at_5
value: 71.438
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.700000000000001
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 5.083
- type: precision_at_3
value: 27.444000000000003
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.88900000000001
- type: recall_at_100
value: 95.0
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 89.22200000000001
- type: recall_at_3
value: 74.933
- type: recall_at_5
value: 80.511
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cosine_accuracy
value: 99.8029702970297
- type: cosine_accuracy_threshold
value: 74.40159320831299
- type: cosine_ap
value: 94.58107371506443
- type: cosine_f1
value: 90.01505268439539
- type: cosine_f1_threshold
value: 74.40159320831299
- type: cosine_precision
value: 90.33232628398792
- type: cosine_recall
value: 89.7
- type: dot_accuracy
value: 99.8029702970297
- type: dot_accuracy_threshold
value: 74.40159320831299
- type: dot_ap
value: 94.58108694234896
- type: dot_f1
value: 90.01505268439539
- type: dot_f1_threshold
value: 74.40159320831299
- type: dot_precision
value: 90.33232628398792
- type: dot_recall
value: 89.7
- type: euclidean_accuracy
value: 99.8029702970297
- type: euclidean_accuracy_threshold
value: 71.55194282531738
- type: euclidean_ap
value: 94.58107371506446
- type: euclidean_f1
value: 90.01505268439539
- type: euclidean_f1_threshold
value: 71.55194282531738
- type: euclidean_precision
value: 90.33232628398792
- type: euclidean_recall
value: 89.7
- type: main_score
value: 94.91386698713322
- type: manhattan_accuracy
value: 99.8108910891089
- type: manhattan_accuracy_threshold
value: 1696.7340469360352
- type: manhattan_ap
value: 94.91386698713322
- type: manhattan_f1
value: 90.4927824788452
- type: manhattan_f1_threshold
value: 1696.7340469360352
- type: manhattan_precision
value: 90.08919722497522
- type: manhattan_recall
value: 90.9
- type: max_ap
value: 94.91386698713322
- type: max_f1
value: 90.4927824788452
- type: max_precision
value: 90.33232628398792
- type: max_recall
value: 90.9
- type: similarity_accuracy
value: 99.8029702970297
- type: similarity_accuracy_threshold
value: 74.40159320831299
- type: similarity_ap
value: 94.58107371506443
- type: similarity_f1
value: 90.01505268439539
- type: similarity_f1_threshold
value: 74.40159320831299
- type: similarity_precision
value: 90.33232628398792
- type: similarity_recall
value: 89.7
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 67.22104632684339
- type: v_measure
value: 67.22104632684339
- type: v_measure_std
value: 4.510073189377009
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 33.69502959609247
- type: v_measure
value: 33.69502959609247
- type: v_measure_std
value: 1.7351941868223697
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: main_score
value: 49.33572386600858
- type: map
value: 49.33572386600858
- type: mrr
value: 50.25399743230625
- type: nAUC_map_diff1
value: 36.68702916524911
- type: nAUC_map_max
value: 15.78050039369413
- type: nAUC_map_std
value: 9.735729247790866
- type: nAUC_mrr_diff1
value: 36.82154498603323
- type: nAUC_mrr_max
value: 16.371339214758713
- type: nAUC_mrr_std
value: 9.929514279072379
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 28.78169000462832
- type: cosine_spearman
value: 29.152425546074824
- type: dot_pearson
value: 28.781692477370914
- type: dot_spearman
value: 29.152370579886423
- type: main_score
value: 29.152425546074824
- type: pearson
value: 28.78169000462832
- type: spearman
value: 29.152425546074824
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: main_score
value: 78.374
- type: map_at_1
value: 0.22100000000000003
- type: map_at_10
value: 1.9980000000000002
- type: map_at_100
value: 12.812000000000001
- type: map_at_1000
value: 31.823
- type: map_at_20
value: 3.6859999999999995
- type: map_at_3
value: 0.656
- type: map_at_5
value: 1.0670000000000002
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 90.56666666666666
- type: mrr_at_100
value: 90.56666666666666
- type: mrr_at_1000
value: 90.56666666666666
- type: mrr_at_20
value: 90.56666666666666
- type: mrr_at_3
value: 89.66666666666667
- type: mrr_at_5
value: 90.56666666666666
- type: nauc_map_at_1000_diff1
value: 2.825877135411271
- type: nauc_map_at_1000_max
value: 40.607799285634
- type: nauc_map_at_1000_std
value: 75.56929127733711
- type: nauc_map_at_100_diff1
value: 17.09931837591714
- type: nauc_map_at_100_max
value: 26.017672927390556
- type: nauc_map_at_100_std
value: 47.97065512030576
- type: nauc_map_at_10_diff1
value: 18.2493061824924
- type: nauc_map_at_10_max
value: 14.631430140768051
- type: nauc_map_at_10_std
value: 6.843536754351145
- type: nauc_map_at_1_diff1
value: 22.577139455591702
- type: nauc_map_at_1_max
value: 0.15518062954687648
- type: nauc_map_at_1_std
value: 4.518832555249529
- type: nauc_map_at_20_diff1
value: 13.380363593233845
- type: nauc_map_at_20_max
value: 14.364050402931303
- type: nauc_map_at_20_std
value: 14.97367017439393
- type: nauc_map_at_3_diff1
value: 15.885210137428182
- type: nauc_map_at_3_max
value: 3.562057528491576
- type: nauc_map_at_3_std
value: 2.378758614671768
- type: nauc_map_at_5_diff1
value: 14.49860277826242
- type: nauc_map_at_5_max
value: 7.729805934487601
- type: nauc_map_at_5_std
value: 1.4105962147738722
- type: nauc_mrr_at_1000_diff1
value: 56.881060817300266
- type: nauc_mrr_at_1000_max
value: 41.11734189808372
- type: nauc_mrr_at_1000_std
value: 50.43684357282267
- type: nauc_mrr_at_100_diff1
value: 56.881060817300266
- type: nauc_mrr_at_100_max
value: 41.11734189808372
- type: nauc_mrr_at_100_std
value: 50.43684357282267
- type: nauc_mrr_at_10_diff1
value: 56.881060817300266
- type: nauc_mrr_at_10_max
value: 41.11734189808372
- type: nauc_mrr_at_10_std
value: 50.43684357282267
- type: nauc_mrr_at_1_diff1
value: 58.64629356897393
- type: nauc_mrr_at_1_max
value: 32.48649975454101
- type: nauc_mrr_at_1_std
value: 43.955571919489394
- type: nauc_mrr_at_20_diff1
value: 56.881060817300266
- type: nauc_mrr_at_20_max
value: 41.11734189808372
- type: nauc_mrr_at_20_std
value: 50.43684357282267
- type: nauc_mrr_at_3_diff1
value: 53.77571146801908
- type: nauc_mrr_at_3_max
value: 45.26470680316847
- type: nauc_mrr_at_3_std
value: 53.000845308537706
- type: nauc_mrr_at_5_diff1
value: 56.881060817300266
- type: nauc_mrr_at_5_max
value: 41.11734189808372
- type: nauc_mrr_at_5_std
value: 50.43684357282267
- type: nauc_ndcg_at_1000_diff1
value: 5.706304837276804
- type: nauc_ndcg_at_1000_max
value: 40.29128039047473
- type: nauc_ndcg_at_1000_std
value: 71.00623045997143
- type: nauc_ndcg_at_100_diff1
value: 5.781640210958165
- type: nauc_ndcg_at_100_max
value: 43.91454038788984
- type: nauc_ndcg_at_100_std
value: 73.38353180392235
- type: nauc_ndcg_at_10_diff1
value: 26.9639013902839
- type: nauc_ndcg_at_10_max
value: 54.33014371697244
- type: nauc_ndcg_at_10_std
value: 47.792741117341144
- type: nauc_ndcg_at_1_diff1
value: 54.66632834306011
- type: nauc_ndcg_at_1_max
value: 30.289266683582845
- type: nauc_ndcg_at_1_std
value: 33.96599847754379
- type: nauc_ndcg_at_20_diff1
value: 17.30631583279515
- type: nauc_ndcg_at_20_max
value: 51.11318537065157
- type: nauc_ndcg_at_20_std
value: 58.77421488656353
- type: nauc_ndcg_at_3_diff1
value: 29.77344612486348
- type: nauc_ndcg_at_3_max
value: 37.42364187792375
- type: nauc_ndcg_at_3_std
value: 41.1907099151911
- type: nauc_ndcg_at_5_diff1
value: 26.050198501250804
- type: nauc_ndcg_at_5_max
value: 47.51636664318881
- type: nauc_ndcg_at_5_std
value: 42.27162971112885
- type: nauc_precision_at_1000_diff1
value: -5.147193986603446
- type: nauc_precision_at_1000_max
value: 35.2107091684719
- type: nauc_precision_at_1000_std
value: 46.18948291863976
- type: nauc_precision_at_100_diff1
value: 8.820554100487717
- type: nauc_precision_at_100_max
value: 45.45756541797819
- type: nauc_precision_at_100_std
value: 76.13204940288823
- type: nauc_precision_at_10_diff1
value: 24.200964449927067
- type: nauc_precision_at_10_max
value: 63.97368322679529
- type: nauc_precision_at_10_std
value: 51.453029793278795
- type: nauc_precision_at_1_diff1
value: 58.64629356897393
- type: nauc_precision_at_1_max
value: 32.48649975454101
- type: nauc_precision_at_1_std
value: 43.955571919489394
- type: nauc_precision_at_20_diff1
value: 9.308587936619213
- type: nauc_precision_at_20_max
value: 48.79243631270248
- type: nauc_precision_at_20_std
value: 62.069859056289864
- type: nauc_precision_at_3_diff1
value: 33.581669226830584
- type: nauc_precision_at_3_max
value: 56.22119815668209
- type: nauc_precision_at_3_std
value: 51.94572452636975
- type: nauc_precision_at_5_diff1
value: 27.412098506105657
- type: nauc_precision_at_5_max
value: 62.44729045506555
- type: nauc_precision_at_5_std
value: 44.765099619080445
- type: nauc_recall_at_1000_diff1
value: -1.1672849905619294
- type: nauc_recall_at_1000_max
value: 30.24145654488767
- type: nauc_recall_at_1000_std
value: 59.841775004234165
- type: nauc_recall_at_100_diff1
value: 14.955315589973456
- type: nauc_recall_at_100_max
value: 14.182437740698777
- type: nauc_recall_at_100_std
value: 34.85010900316272
- type: nauc_recall_at_10_diff1
value: 13.823849163501494
- type: nauc_recall_at_10_max
value: 7.576291042005819
- type: nauc_recall_at_10_std
value: 1.4227650589393714
- type: nauc_recall_at_1_diff1
value: 22.577139455591702
- type: nauc_recall_at_1_max
value: 0.15518062954687648
- type: nauc_recall_at_1_std
value: 4.518832555249529
- type: nauc_recall_at_20_diff1
value: 9.577895424349496
- type: nauc_recall_at_20_max
value: 4.326841788680218
- type: nauc_recall_at_20_std
value: 8.40592602308462
- type: nauc_recall_at_3_diff1
value: 11.099599191623701
- type: nauc_recall_at_3_max
value: 1.8660565345942584
- type: nauc_recall_at_3_std
value: -0.5969085344249611
- type: nauc_recall_at_5_diff1
value: 8.674608384913736
- type: nauc_recall_at_5_max
value: 3.730380788869587
- type: nauc_recall_at_5_std
value: -3.4877352049852024
- type: ndcg_at_1
value: 80.0
- type: ndcg_at_10
value: 78.374
- type: ndcg_at_100
value: 63.385000000000005
- type: ndcg_at_1000
value: 57.406
- type: ndcg_at_20
value: 75.795
- type: ndcg_at_3
value: 80.419
- type: ndcg_at_5
value: 80.157
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 84.0
- type: precision_at_100
value: 65.88000000000001
- type: precision_at_1000
value: 25.502000000000002
- type: precision_at_20
value: 80.30000000000001
- type: precision_at_3
value: 86.667
- type: precision_at_5
value: 86.4
- type: recall_at_1
value: 0.22100000000000003
- type: recall_at_10
value: 2.179
- type: recall_at_100
value: 15.934000000000001
- type: recall_at_1000
value: 54.458
- type: recall_at_20
value: 4.144
- type: recall_at_3
value: 0.6859999999999999
- type: recall_at_5
value: 1.1320000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: main_score
value: 28.907
- type: map_at_1
value: 2.675
- type: map_at_10
value: 12.215
- type: map_at_100
value: 18.7
- type: map_at_1000
value: 20.398
- type: map_at_20
value: 15.078
- type: map_at_3
value: 6.241
- type: map_at_5
value: 8.289
- type: mrr_at_1
value: 32.6530612244898
- type: mrr_at_10
value: 50.01133786848071
- type: mrr_at_100
value: 50.77517365675259
- type: mrr_at_1000
value: 50.77517365675259
- type: mrr_at_20
value: 50.588814902724664
- type: mrr_at_3
value: 45.578231292517
- type: mrr_at_5
value: 48.53741496598638
- type: nauc_map_at_1000_diff1
value: -5.684538294981354
- type: nauc_map_at_1000_max
value: -33.46305720843361
- type: nauc_map_at_1000_std
value: 1.9671166101260358
- type: nauc_map_at_100_diff1
value: -3.9527668773790374
- type: nauc_map_at_100_max
value: -33.547343271958304
- type: nauc_map_at_100_std
value: -1.4543726200894687
- type: nauc_map_at_10_diff1
value: -3.6912102827982975
- type: nauc_map_at_10_max
value: -37.051501400243644
- type: nauc_map_at_10_std
value: -18.58369649223091
- type: nauc_map_at_1_diff1
value: 8.542642521750217
- type: nauc_map_at_1_max
value: -42.118453460843014
- type: nauc_map_at_1_std
value: -21.4477651608444
- type: nauc_map_at_20_diff1
value: -4.1294483682157335
- type: nauc_map_at_20_max
value: -32.055300714683774
- type: nauc_map_at_20_std
value: -13.633460827906779
- type: nauc_map_at_3_diff1
value: 4.166012812499575
- type: nauc_map_at_3_max
value: -44.421760913346375
- type: nauc_map_at_3_std
value: -22.934729762627693
- type: nauc_map_at_5_diff1
value: 5.0705280599427285
- type: nauc_map_at_5_max
value: -39.880207516910055
- type: nauc_map_at_5_std
value: -19.089070592204358
- type: nauc_mrr_at_1000_diff1
value: 8.136502099178854
- type: nauc_mrr_at_1000_max
value: -54.053135657703564
- type: nauc_mrr_at_1000_std
value: 0.8410793475356224
- type: nauc_mrr_at_100_diff1
value: 8.136502099178854
- type: nauc_mrr_at_100_max
value: -54.053135657703564
- type: nauc_mrr_at_100_std
value: 0.8410793475356224
- type: nauc_mrr_at_10_diff1
value: 7.021058071372796
- type: nauc_mrr_at_10_max
value: -55.576671480124475
- type: nauc_mrr_at_10_std
value: 2.659844175871393
- type: nauc_mrr_at_1_diff1
value: 21.763874961879942
- type: nauc_mrr_at_1_max
value: -42.10185605661237
- type: nauc_mrr_at_1_std
value: -6.492292167140558
- type: nauc_mrr_at_20_diff1
value: 8.441891181402887
- type: nauc_mrr_at_20_max
value: -54.466795585812235
- type: nauc_mrr_at_20_std
value: 0.916114699709143
- type: nauc_mrr_at_3_diff1
value: 7.551389256661414
- type: nauc_mrr_at_3_max
value: -46.97364074837694
- type: nauc_mrr_at_3_std
value: 1.0411397370775466
- type: nauc_mrr_at_5_diff1
value: 5.235804734715955
- type: nauc_mrr_at_5_max
value: -54.37509495435838
- type: nauc_mrr_at_5_std
value: 2.779654633655762
- type: nauc_ndcg_at_1000_diff1
value: -15.397449719696779
- type: nauc_ndcg_at_1000_max
value: -43.619552110596665
- type: nauc_ndcg_at_1000_std
value: 26.3557588044005
- type: nauc_ndcg_at_100_diff1
value: -8.064551008407328
- type: nauc_ndcg_at_100_max
value: -45.62898014606384
- type: nauc_ndcg_at_100_std
value: 19.02252139372526
- type: nauc_ndcg_at_10_diff1
value: -4.128778098656938
- type: nauc_ndcg_at_10_max
value: -47.533595647961825
- type: nauc_ndcg_at_10_std
value: -3.3387983790901616
- type: nauc_ndcg_at_1_diff1
value: 15.241311807512584
- type: nauc_ndcg_at_1_max
value: -41.98413041761103
- type: nauc_ndcg_at_1_std
value: -1.7966111564973624
- type: nauc_ndcg_at_20_diff1
value: -5.70487127711277
- type: nauc_ndcg_at_20_max
value: -43.296928773082485
- type: nauc_ndcg_at_20_std
value: -4.953768651191041
- type: nauc_ndcg_at_3_diff1
value: 10.059341497787937
- type: nauc_ndcg_at_3_max
value: -40.68501908879975
- type: nauc_ndcg_at_3_std
value: -3.6931074797187877
- type: nauc_ndcg_at_5_diff1
value: 7.526983752941929
- type: nauc_ndcg_at_5_max
value: -43.365397576700275
- type: nauc_ndcg_at_5_std
value: 0.32616836825174683
- type: nauc_precision_at_1000_diff1
value: -7.438317571660842
- type: nauc_precision_at_1000_max
value: 34.73241001748508
- type: nauc_precision_at_1000_std
value: 36.25365158109604
- type: nauc_precision_at_100_diff1
value: -4.627005077446657
- type: nauc_precision_at_100_max
value: -15.93628289282409
- type: nauc_precision_at_100_std
value: 68.61386525027707
- type: nauc_precision_at_10_diff1
value: -10.52039936457346
- type: nauc_precision_at_10_max
value: -43.34615042118174
- type: nauc_precision_at_10_std
value: 9.318534549691767
- type: nauc_precision_at_1_diff1
value: 21.763874961879942
- type: nauc_precision_at_1_max
value: -42.10185605661237
- type: nauc_precision_at_1_std
value: -6.492292167140558
- type: nauc_precision_at_20_diff1
value: -2.287812706503246
- type: nauc_precision_at_20_max
value: -28.10959274429549
- type: nauc_precision_at_20_std
value: 16.788667831779485
- type: nauc_precision_at_3_diff1
value: 11.569650243424755
- type: nauc_precision_at_3_max
value: -41.668998559185844
- type: nauc_precision_at_3_std
value: -0.3803285872339615
- type: nauc_precision_at_5_diff1
value: 7.598490650206377
- type: nauc_precision_at_5_max
value: -41.68148813885381
- type: nauc_precision_at_5_std
value: 7.354258555131649
- type: nauc_recall_at_1000_diff1
value: -50.220542196994636
- type: nauc_recall_at_1000_max
value: -16.95193388500635
- type: nauc_recall_at_1000_std
value: 69.28134193017735
- type: nauc_recall_at_100_diff1
value: -15.415419361213853
- type: nauc_recall_at_100_max
value: -33.60910097372997
- type: nauc_recall_at_100_std
value: 35.403748730364256
- type: nauc_recall_at_10_diff1
value: -14.144822663337028
- type: nauc_recall_at_10_max
value: -38.11986778901871
- type: nauc_recall_at_10_std
value: -13.87707926888663
- type: nauc_recall_at_1_diff1
value: 8.542642521750217
- type: nauc_recall_at_1_max
value: -42.118453460843014
- type: nauc_recall_at_1_std
value: -21.4477651608444
- type: nauc_recall_at_20_diff1
value: -12.3394417307943
- type: nauc_recall_at_20_max
value: -32.75019884128939
- type: nauc_recall_at_20_std
value: -6.875770812126497
- type: nauc_recall_at_3_diff1
value: -0.907011119452535
- type: nauc_recall_at_3_max
value: -42.06461204250678
- type: nauc_recall_at_3_std
value: -18.765470997666945
- type: nauc_recall_at_5_diff1
value: -1.063588562013453
- type: nauc_recall_at_5_max
value: -39.15779594344513
- type: nauc_recall_at_5_std
value: -14.839683507905466
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 40.211000000000006
- type: ndcg_at_1000
value: 51.482000000000006
- type: ndcg_at_20
value: 29.804000000000002
- type: ndcg_at_3
value: 30.802000000000003
- type: ndcg_at_5
value: 29.511
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 26.531
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.576
- type: precision_at_20
value: 20.102
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 30.203999999999997
- type: recall_at_1
value: 2.675
- type: recall_at_10
value: 19.750999999999998
- type: recall_at_100
value: 50.365
- type: recall_at_1000
value: 84.773
- type: recall_at_20
value: 27.632
- type: recall_at_3
value: 7.578
- type: recall_at_5
value: 11.346
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 70.810546875
- type: ap
value: 14.252152092007437
- type: ap_weighted
value: 14.252152092007437
- type: f1
value: 54.48430687519361
- type: f1_weighted
value: 77.28107973539473
- type: main_score
value: 70.810546875
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 62.66553480475382
- type: f1
value: 62.053566222838384
- type: f1_weighted
value: 60.48069640139468
- type: main_score
value: 62.66553480475382
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 49.676842982432774
- type: v_measure
value: 49.676842982432774
- type: v_measure_std
value: 1.3041225457855343
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cosine_accuracy
value: 85.07480479227513
- type: cosine_accuracy_threshold
value: 78.39158177375793
- type: cosine_ap
value: 70.92737526837412
- type: cosine_f1
value: 66.1954959271682
- type: cosine_f1_threshold
value: 74.12481307983398
- type: cosine_precision
value: 60.61869240895129
- type: cosine_recall
value: 72.9023746701847
- type: dot_accuracy
value: 85.07480479227513
- type: dot_accuracy_threshold
value: 78.39158773422241
- type: dot_ap
value: 70.92737601494514
- type: dot_f1
value: 66.1954959271682
- type: dot_f1_threshold
value: 74.12482500076294
- type: dot_precision
value: 60.61869240895129
- type: dot_recall
value: 72.9023746701847
- type: euclidean_accuracy
value: 85.07480479227513
- type: euclidean_accuracy_threshold
value: 65.73951244354248
- type: euclidean_ap
value: 70.92738137519932
- type: euclidean_f1
value: 66.1954959271682
- type: euclidean_f1_threshold
value: 71.93772792816162
- type: euclidean_precision
value: 60.61869240895129
- type: euclidean_recall
value: 72.9023746701847
- type: main_score
value: 70.92738137519932
- type: manhattan_accuracy
value: 84.89002801454372
- type: manhattan_accuracy_threshold
value: 1543.7227249145508
- type: manhattan_ap
value: 70.45819704836475
- type: manhattan_f1
value: 65.75607397558322
- type: manhattan_f1_threshold
value: 1691.067886352539
- type: manhattan_precision
value: 60.673656033905864
- type: manhattan_recall
value: 71.76781002638522
- type: max_ap
value: 70.92738137519932
- type: max_f1
value: 66.1954959271682
- type: max_precision
value: 60.673656033905864
- type: max_recall
value: 72.9023746701847
- type: similarity_accuracy
value: 85.07480479227513
- type: similarity_accuracy_threshold
value: 78.39158177375793
- type: similarity_ap
value: 70.92737526837412
- type: similarity_f1
value: 66.1954959271682
- type: similarity_f1_threshold
value: 74.12481307983398
- type: similarity_precision
value: 60.61869240895129
- type: similarity_recall
value: 72.9023746701847
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cosine_accuracy
value: 89.32355338223309
- type: cosine_accuracy_threshold
value: 72.50972986221313
- type: cosine_ap
value: 86.74895762701595
- type: cosine_f1
value: 79.21738810635873
- type: cosine_f1_threshold
value: 69.94493007659912
- type: cosine_precision
value: 75.82905020066183
- type: cosine_recall
value: 82.9226978749615
- type: dot_accuracy
value: 89.32355338223309
- type: dot_accuracy_threshold
value: 72.50974178314209
- type: dot_ap
value: 86.74894970312789
- type: dot_f1
value: 79.21738810635873
- type: dot_f1_threshold
value: 69.94493007659912
- type: dot_precision
value: 75.82905020066183
- type: dot_recall
value: 82.9226978749615
- type: euclidean_accuracy
value: 89.32355338223309
- type: euclidean_accuracy_threshold
value: 74.14885759353638
- type: euclidean_ap
value: 86.74893799074754
- type: euclidean_f1
value: 79.21738810635873
- type: euclidean_f1_threshold
value: 77.53072381019592
- type: euclidean_precision
value: 75.82905020066183
- type: euclidean_recall
value: 82.9226978749615
- type: main_score
value: 86.74895762701595
- type: manhattan_accuracy
value: 89.28474405247022
- type: manhattan_accuracy_threshold
value: 1725.102424621582
- type: manhattan_ap
value: 86.69699016049593
- type: manhattan_f1
value: 79.00847425990219
- type: manhattan_f1_threshold
value: 1807.0615768432617
- type: manhattan_precision
value: 76.68671642872673
- type: manhattan_recall
value: 81.4752078842008
- type: max_ap
value: 86.74895762701595
- type: max_f1
value: 79.21738810635873
- type: max_precision
value: 76.68671642872673
- type: max_recall
value: 82.9226978749615
- type: similarity_accuracy
value: 89.32355338223309
- type: similarity_accuracy_threshold
value: 72.50972986221313
- type: similarity_ap
value: 86.74895762701595
- type: similarity_f1
value: 79.21738810635873
- type: similarity_f1_threshold
value: 69.94493007659912
- type: similarity_precision
value: 75.82905020066183
- type: similarity_recall
value: 82.9226978749615
task:
type: PairClassification
- dataset:
config: default
name: MTEB AFQMC
revision: b44c3b011063adb25877c13823db83bb193913c4
split: validation
type: C-MTEB/AFQMC
metrics:
- type: cosine_pearson
value: 38.29145368837485
- type: cosine_spearman
value: 39.41056570139273
- type: euclidean_pearson
value: 38.0651461534699
- type: euclidean_spearman
value: 39.41056569992215
- type: main_score
value: 39.41056570139273
- type: manhattan_pearson
value: 37.70876309636298
- type: manhattan_spearman
value: 39.04864822187025
- type: pearson
value: 38.29145368837485
- type: spearman
value: 39.41056570139273
task:
type: STS
- dataset:
config: default
name: MTEB ATEC
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
split: test
type: C-MTEB/ATEC
metrics:
- type: cosine_pearson
value: 46.47704725371303
- type: cosine_spearman
value: 46.9183608596495
- type: euclidean_pearson
value: 49.36420417260176
- type: euclidean_spearman
value: 46.91835860770197
- type: main_score
value: 46.9183608596495
- type: manhattan_pearson
value: 49.124318954541145
- type: manhattan_spearman
value: 46.69432997494852
- type: pearson
value: 46.47704725371303
- type: spearman
value: 46.9183608596495
task:
type: STS
- dataset:
config: zh
name: MTEB AmazonReviewsClassification (zh)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 41.858000000000004
- type: f1
value: 38.04731113109237
- type: f1_weighted
value: 38.04731113109237
- type: main_score
value: 41.858000000000004
task:
type: Classification
- dataset:
config: default
name: MTEB BQ
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
split: test
type: C-MTEB/BQ
metrics:
- type: cosine_pearson
value: 51.2270285721989
- type: cosine_spearman
value: 51.53381532349815
- type: euclidean_pearson
value: 50.83672339980501
- type: euclidean_spearman
value: 51.53382225123762
- type: main_score
value: 51.53381532349815
- type: manhattan_pearson
value: 50.481897254555655
- type: manhattan_spearman
value: 51.165938122581764
- type: pearson
value: 51.2270285721989
- type: spearman
value: 51.53381532349815
task:
type: STS
- dataset:
config: default
name: MTEB CLSClusteringP2P
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
split: test
type: C-MTEB/CLSClusteringP2P
metrics:
- type: main_score
value: 42.6351765343486
- type: v_measure
value: 42.6351765343486
- type: v_measure_std
value: 0.8266776246358534
task:
type: Clustering
- dataset:
config: default
name: MTEB CLSClusteringS2S
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
split: test
type: C-MTEB/CLSClusteringS2S
metrics:
- type: main_score
value: 39.14026434895999
- type: v_measure
value: 39.14026434895999
- type: v_measure_std
value: 0.8843326244130124
task:
type: Clustering
- dataset:
config: default
name: MTEB CMedQAv1
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
split: test
type: C-MTEB/CMedQAv1-reranking
metrics:
- type: main_score
value: 81.62649518330059
- type: map
value: 81.62649518330059
- type: mrr
value: 84.59920634920634
- type: nAUC_map_diff1
value: 57.57622865226385
- type: nAUC_map_max
value: 64.24578070815535
- type: nAUC_map_std
value: 25.825835637398292
- type: nAUC_mrr_diff1
value: 64.506555321586
- type: nAUC_mrr_max
value: 73.72849839805279
- type: nAUC_mrr_std
value: 33.50231715071016
task:
type: Reranking
- dataset:
config: default
name: MTEB CMedQAv2
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
split: test
type: C-MTEB/CMedQAv2-reranking
metrics:
- type: main_score
value: 82.6884842555647
- type: map
value: 82.6884842555647
- type: mrr
value: 85.7413492063492
- type: nAUC_map_diff1
value: 62.227875149480674
- type: nAUC_map_max
value: 65.39899447833739
- type: nAUC_map_std
value: 22.232770911289762
- type: nAUC_mrr_diff1
value: 71.02339957841794
- type: nAUC_mrr_max
value: 75.79106833222022
- type: nAUC_mrr_std
value: 31.922312297325313
task:
type: Reranking
- dataset:
config: default
name: MTEB CmedqaRetrieval
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: main_score
value: 41.912
- type: map_at_1
value: 24.154
- type: map_at_10
value: 35.771
- type: map_at_100
value: 37.361
- type: map_at_1000
value: 37.501
- type: map_at_20
value: 36.614000000000004
- type: map_at_3
value: 32.208999999999996
- type: map_at_5
value: 34.135
- type: mrr_at_1
value: 36.959239809952486
- type: mrr_at_10
value: 44.68076344482939
- type: mrr_at_100
value: 45.58051326135588
- type: mrr_at_1000
value: 45.63875894256334
- type: mrr_at_20
value: 45.18303299514746
- type: mrr_at_3
value: 42.55230474285231
- type: mrr_at_5
value: 43.73134950404267
- type: nauc_map_at_1000_diff1
value: 48.19593787339997
- type: nauc_map_at_1000_max
value: 45.80793623720016
- type: nauc_map_at_1000_std
value: -4.498738770651924
- type: nauc_map_at_100_diff1
value: 48.14822061537294
- type: nauc_map_at_100_max
value: 45.766276109027565
- type: nauc_map_at_100_std
value: -4.531921171029137
- type: nauc_map_at_10_diff1
value: 48.056275142802576
- type: nauc_map_at_10_max
value: 44.86133659352232
- type: nauc_map_at_10_std
value: -5.678734969973419
- type: nauc_map_at_1_diff1
value: 54.126770601702304
- type: nauc_map_at_1_max
value: 36.294268209121014
- type: nauc_map_at_1_std
value: -8.314309694617984
- type: nauc_map_at_20_diff1
value: 48.040597097872464
- type: nauc_map_at_20_max
value: 45.361480980577554
- type: nauc_map_at_20_std
value: -5.1056219220416414
- type: nauc_map_at_3_diff1
value: 48.824963816099306
- type: nauc_map_at_3_max
value: 42.59637253351721
- type: nauc_map_at_3_std
value: -7.142494643007989
- type: nauc_map_at_5_diff1
value: 48.39295465854973
- type: nauc_map_at_5_max
value: 43.81282348287875
- type: nauc_map_at_5_std
value: -6.551989013310646
- type: nauc_mrr_at_1000_diff1
value: 55.254016903996884
- type: nauc_mrr_at_1000_max
value: 53.09878029734
- type: nauc_mrr_at_1000_std
value: -0.71508532680536
- type: nauc_mrr_at_100_diff1
value: 55.22345420339283
- type: nauc_mrr_at_100_max
value: 53.09592092707568
- type: nauc_mrr_at_100_std
value: -0.6931227079570508
- type: nauc_mrr_at_10_diff1
value: 55.18285620712305
- type: nauc_mrr_at_10_max
value: 53.0128131412299
- type: nauc_mrr_at_10_std
value: -0.9419014092991297
- type: nauc_mrr_at_1_diff1
value: 61.53750424643732
- type: nauc_mrr_at_1_max
value: 54.24674408902589
- type: nauc_mrr_at_1_std
value: -1.9080737950338242
- type: nauc_mrr_at_20_diff1
value: 55.1955850013467
- type: nauc_mrr_at_20_max
value: 53.04094140836042
- type: nauc_mrr_at_20_std
value: -0.8063521557954811
- type: nauc_mrr_at_3_diff1
value: 56.11946877115898
- type: nauc_mrr_at_3_max
value: 53.46308123387505
- type: nauc_mrr_at_3_std
value: -1.25039802843073
- type: nauc_mrr_at_5_diff1
value: 55.59945526594265
- type: nauc_mrr_at_5_max
value: 53.094458463158546
- type: nauc_mrr_at_5_std
value: -1.1485696186251675
- type: nauc_ndcg_at_1000_diff1
value: 48.630394030057936
- type: nauc_ndcg_at_1000_max
value: 49.067370003850804
- type: nauc_ndcg_at_1000_std
value: -0.6379826555665533
- type: nauc_ndcg_at_100_diff1
value: 47.4242704726565
- type: nauc_ndcg_at_100_max
value: 48.72472432340327
- type: nauc_ndcg_at_100_std
value: -0.16567922191922693
- type: nauc_ndcg_at_10_diff1
value: 47.16820763109196
- type: nauc_ndcg_at_10_max
value: 46.69185085844686
- type: nauc_ndcg_at_10_std
value: -3.793946471519526
- type: nauc_ndcg_at_1_diff1
value: 61.53750424643732
- type: nauc_ndcg_at_1_max
value: 54.24674408902589
- type: nauc_ndcg_at_1_std
value: -1.9080737950338242
- type: nauc_ndcg_at_20_diff1
value: 47.062085251805165
- type: nauc_ndcg_at_20_max
value: 47.36804459443504
- type: nauc_ndcg_at_20_std
value: -2.6790807434003154
- type: nauc_ndcg_at_3_diff1
value: 49.37353194021333
- type: nauc_ndcg_at_3_max
value: 48.35156335077874
- type: nauc_ndcg_at_3_std
value: -3.3398102492848656
- type: nauc_ndcg_at_5_diff1
value: 48.0947159130794
- type: nauc_ndcg_at_5_max
value: 46.680994331148504
- type: nauc_ndcg_at_5_std
value: -4.043874632127286
- type: nauc_precision_at_1000_diff1
value: 6.109079873705322
- type: nauc_precision_at_1000_max
value: 29.504954981504778
- type: nauc_precision_at_1000_std
value: 22.93941750032271
- type: nauc_precision_at_100_diff1
value: 11.927597721886762
- type: nauc_precision_at_100_max
value: 39.33748646673334
- type: nauc_precision_at_100_std
value: 23.95901745749321
- type: nauc_precision_at_10_diff1
value: 24.82917619008383
- type: nauc_precision_at_10_max
value: 48.25909614877216
- type: nauc_precision_at_10_std
value: 10.250143723179713
- type: nauc_precision_at_1_diff1
value: 61.53750424643732
- type: nauc_precision_at_1_max
value: 54.24674408902589
- type: nauc_precision_at_1_std
value: -1.9080737950338242
- type: nauc_precision_at_20_diff1
value: 20.46788631872044
- type: nauc_precision_at_20_max
value: 45.80722239546835
- type: nauc_precision_at_20_std
value: 14.720113784118633
- type: nauc_precision_at_3_diff1
value: 36.57074097596536
- type: nauc_precision_at_3_max
value: 52.82030883151323
- type: nauc_precision_at_3_std
value: 3.9283920700632526
- type: nauc_precision_at_5_diff1
value: 31.217047808074472
- type: nauc_precision_at_5_max
value: 51.092762871371654
- type: nauc_precision_at_5_std
value: 6.51063180919143
- type: nauc_recall_at_1000_diff1
value: 31.30321342816756
- type: nauc_recall_at_1000_max
value: 55.469754854393486
- type: nauc_recall_at_1000_std
value: 46.627360786810655
- type: nauc_recall_at_100_diff1
value: 26.36814612505595
- type: nauc_recall_at_100_max
value: 41.98698104560196
- type: nauc_recall_at_100_std
value: 16.01155635795268
- type: nauc_recall_at_10_diff1
value: 34.230500025598566
- type: nauc_recall_at_10_max
value: 38.46622774541338
- type: nauc_recall_at_10_std
value: -3.5976451821598636
- type: nauc_recall_at_1_diff1
value: 54.126770601702304
- type: nauc_recall_at_1_max
value: 36.294268209121014
- type: nauc_recall_at_1_std
value: -8.314309694617984
- type: nauc_recall_at_20_diff1
value: 31.92600233159853
- type: nauc_recall_at_20_max
value: 39.151276414762634
- type: nauc_recall_at_20_std
value: -0.008185757782290744
- type: nauc_recall_at_3_diff1
value: 40.983135298326175
- type: nauc_recall_at_3_max
value: 39.282144240448105
- type: nauc_recall_at_3_std
value: -6.478558331383442
- type: nauc_recall_at_5_diff1
value: 37.96561121548906
- type: nauc_recall_at_5_max
value: 38.25573176800016
- type: nauc_recall_at_5_std
value: -5.896110553981627
- type: ndcg_at_1
value: 36.958999999999996
- type: ndcg_at_10
value: 41.912
- type: ndcg_at_100
value: 48.412
- type: ndcg_at_1000
value: 51.076
- type: ndcg_at_20
value: 44.237
- type: ndcg_at_3
value: 37.596000000000004
- type: ndcg_at_5
value: 39.257
- type: precision_at_1
value: 36.958999999999996
- type: precision_at_10
value: 9.222
- type: precision_at_100
value: 1.456
- type: precision_at_1000
value: 0.18
- type: precision_at_20
value: 5.404
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.204
- type: recall_at_1
value: 24.154
- type: recall_at_10
value: 51.13799999999999
- type: recall_at_100
value: 78.44200000000001
- type: recall_at_1000
value: 96.607
- type: recall_at_20
value: 59.01499999999999
- type: recall_at_3
value: 37.645
- type: recall_at_5
value: 43.24
task:
type: Retrieval
- dataset:
config: default
name: MTEB Cmnli
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
split: validation
type: C-MTEB/CMNLI
metrics:
- type: cosine_accuracy
value: 70.24654239326519
- type: cosine_accuracy_threshold
value: 65.65687656402588
- type: cosine_ap
value: 76.97337656087815
- type: cosine_f1
value: 72.89293849658314
- type: cosine_f1_threshold
value: 58.187782764434814
- type: cosine_precision
value: 63.230240549828174
- type: cosine_recall
value: 86.04161795651157
- type: dot_accuracy
value: 70.24654239326519
- type: dot_accuracy_threshold
value: 65.65687656402588
- type: dot_ap
value: 76.99306253217402
- type: dot_f1
value: 72.89293849658314
- type: dot_f1_threshold
value: 58.18778872489929
- type: dot_precision
value: 63.230240549828174
- type: dot_recall
value: 86.04161795651157
- type: euclidean_accuracy
value: 70.24654239326519
- type: euclidean_accuracy_threshold
value: 82.8771710395813
- type: euclidean_ap
value: 76.97337656087815
- type: euclidean_f1
value: 72.89293849658314
- type: euclidean_f1_threshold
value: 91.44638776779175
- type: euclidean_precision
value: 63.230240549828174
- type: euclidean_recall
value: 86.04161795651157
- type: main_score
value: 76.99306253217402
- type: manhattan_accuracy
value: 69.74143114852676
- type: manhattan_accuracy_threshold
value: 1963.1107330322266
- type: manhattan_ap
value: 76.44289061856252
- type: manhattan_f1
value: 72.70526528142021
- type: manhattan_f1_threshold
value: 2121.240234375
- type: manhattan_precision
value: 63.93471704807522
- type: manhattan_recall
value: 84.26467149871405
- type: max_ap
value: 76.99306253217402
- type: max_f1
value: 72.89293849658314
- type: max_precision
value: 63.93471704807522
- type: max_recall
value: 86.04161795651157
- type: similarity_accuracy
value: 70.24654239326519
- type: similarity_accuracy_threshold
value: 65.65687656402588
- type: similarity_ap
value: 76.97337656087815
- type: similarity_f1
value: 72.89293849658314
- type: similarity_f1_threshold
value: 58.187782764434814
- type: similarity_precision
value: 63.230240549828174
- type: similarity_recall
value: 86.04161795651157
task:
type: PairClassification
- dataset:
config: default
name: MTEB CovidRetrieval
revision: 1271c7809071a13532e05f25fb53511ffce77117
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: main_score
value: 82.09100000000001
- type: map_at_1
value: 69.679
- type: map_at_10
value: 78.188
- type: map_at_100
value: 78.432
- type: map_at_1000
value: 78.435
- type: map_at_20
value: 78.358
- type: map_at_3
value: 76.458
- type: map_at_5
value: 77.525
- type: mrr_at_1
value: 69.86301369863014
- type: mrr_at_10
value: 78.1891966481008
- type: mrr_at_100
value: 78.43100887014927
- type: mrr_at_1000
value: 78.43409905944281
- type: mrr_at_20
value: 78.3583713625236
- type: mrr_at_3
value: 76.5015806111697
- type: mrr_at_5
value: 77.5816649104321
- type: nauc_map_at_1000_diff1
value: 78.7565094457952
- type: nauc_map_at_1000_max
value: 43.44153271106606
- type: nauc_map_at_1000_std
value: -43.35643127411659
- type: nauc_map_at_100_diff1
value: 78.75464512949722
- type: nauc_map_at_100_max
value: 43.44614729899657
- type: nauc_map_at_100_std
value: -43.35662894001264
- type: nauc_map_at_10_diff1
value: 78.6150484744859
- type: nauc_map_at_10_max
value: 43.22212591985456
- type: nauc_map_at_10_std
value: -43.68204084683379
- type: nauc_map_at_1_diff1
value: 81.86147718901591
- type: nauc_map_at_1_max
value: 43.27595769557031
- type: nauc_map_at_1_std
value: -40.832434398434316
- type: nauc_map_at_20_diff1
value: 78.72313916367459
- type: nauc_map_at_20_max
value: 43.527065459801754
- type: nauc_map_at_20_std
value: -43.299315170766626
- type: nauc_map_at_3_diff1
value: 78.6799910684285
- type: nauc_map_at_3_max
value: 42.319407684110274
- type: nauc_map_at_3_std
value: -45.537423149362695
- type: nauc_map_at_5_diff1
value: 78.25825961555257
- type: nauc_map_at_5_max
value: 42.66902641451189
- type: nauc_map_at_5_std
value: -44.2482231636208
- type: nauc_mrr_at_1000_diff1
value: 78.77840881732628
- type: nauc_mrr_at_1000_max
value: 43.75052183199315
- type: nauc_mrr_at_1000_std
value: -42.89324434781183
- type: nauc_mrr_at_100_diff1
value: 78.7765411998645
- type: nauc_mrr_at_100_max
value: 43.755086077231056
- type: nauc_mrr_at_100_std
value: -42.89351661301109
- type: nauc_mrr_at_10_diff1
value: 78.63610310385711
- type: nauc_mrr_at_10_max
value: 43.52324483162967
- type: nauc_mrr_at_10_std
value: -43.23477882995708
- type: nauc_mrr_at_1_diff1
value: 81.65699303519479
- type: nauc_mrr_at_1_max
value: 44.202391758796914
- type: nauc_mrr_at_1_std
value: -39.36327383599781
- type: nauc_mrr_at_20_diff1
value: 78.7443733650774
- type: nauc_mrr_at_20_max
value: 43.83081490577578
- type: nauc_mrr_at_20_std
value: -42.848142406550764
- type: nauc_mrr_at_3_diff1
value: 78.64356391070008
- type: nauc_mrr_at_3_max
value: 42.76861798176099
- type: nauc_mrr_at_3_std
value: -44.84496156914284
- type: nauc_mrr_at_5_diff1
value: 78.22192606452634
- type: nauc_mrr_at_5_max
value: 43.12757659228294
- type: nauc_mrr_at_5_std
value: -43.471573840955344
- type: nauc_ndcg_at_1000_diff1
value: 78.1838616987732
- type: nauc_ndcg_at_1000_max
value: 43.859382162396884
- type: nauc_ndcg_at_1000_std
value: -43.30653697283926
- type: nauc_ndcg_at_100_diff1
value: 78.13119295479274
- type: nauc_ndcg_at_100_max
value: 44.01086911321529
- type: nauc_ndcg_at_100_std
value: -43.24874302093996
- type: nauc_ndcg_at_10_diff1
value: 77.48152464096923
- type: nauc_ndcg_at_10_max
value: 43.264264169510504
- type: nauc_ndcg_at_10_std
value: -44.580175112852835
- type: nauc_ndcg_at_1_diff1
value: 81.43455985468403
- type: nauc_ndcg_at_1_max
value: 44.252000550874484
- type: nauc_ndcg_at_1_std
value: -39.38237995087698
- type: nauc_ndcg_at_20_diff1
value: 77.85410963490207
- type: nauc_ndcg_at_20_max
value: 44.68578065287876
- type: nauc_ndcg_at_20_std
value: -42.87046493321746
- type: nauc_ndcg_at_3_diff1
value: 77.55400028908774
- type: nauc_ndcg_at_3_max
value: 41.47690499246867
- type: nauc_ndcg_at_3_std
value: -47.96239510251043
- type: nauc_ndcg_at_5_diff1
value: 76.55817027861454
- type: nauc_ndcg_at_5_max
value: 42.01696124525059
- type: nauc_ndcg_at_5_std
value: -45.6385058409844
- type: nauc_precision_at_1000_diff1
value: -28.009627138628257
- type: nauc_precision_at_1000_max
value: 29.24459991455739
- type: nauc_precision_at_1000_std
value: 58.852174419737146
- type: nauc_precision_at_100_diff1
value: -6.814208555904227
- type: nauc_precision_at_100_max
value: 38.58450802218331
- type: nauc_precision_at_100_std
value: 39.48885778925581
- type: nauc_precision_at_10_diff1
value: 42.69404009383913
- type: nauc_precision_at_10_max
value: 39.72607044424161
- type: nauc_precision_at_10_std
value: -22.31713351851116
- type: nauc_precision_at_1_diff1
value: 81.43455985468403
- type: nauc_precision_at_1_max
value: 44.252000550874484
- type: nauc_precision_at_1_std
value: -39.38237995087698
- type: nauc_precision_at_20_diff1
value: 31.218498932644845
- type: nauc_precision_at_20_max
value: 55.11413173622635
- type: nauc_precision_at_20_std
value: 7.702910966907561
- type: nauc_precision_at_3_diff1
value: 67.07260136293569
- type: nauc_precision_at_3_max
value: 37.464338835123904
- type: nauc_precision_at_3_std
value: -51.72773522807322
- type: nauc_precision_at_5_diff1
value: 57.11817879149
- type: nauc_precision_at_5_max
value: 37.78607913838418
- type: nauc_precision_at_5_std
value: -41.3489934177573
- type: nauc_recall_at_1000_diff1
value: 58.37811197433529
- type: nauc_recall_at_1000_max
value: 77.70125019980898
- type: nauc_recall_at_1000_std
value: -7.415635097287519
- type: nauc_recall_at_100_diff1
value: 64.57899134001917
- type: nauc_recall_at_100_max
value: 74.20013410570942
- type: nauc_recall_at_100_std
value: -20.672136729747088
- type: nauc_recall_at_10_diff1
value: 67.93094200727559
- type: nauc_recall_at_10_max
value: 43.42164333462216
- type: nauc_recall_at_10_std
value: -53.33541950399078
- type: nauc_recall_at_1_diff1
value: 81.86147718901591
- type: nauc_recall_at_1_max
value: 43.27595769557031
- type: nauc_recall_at_1_std
value: -40.832434398434316
- type: nauc_recall_at_20_diff1
value: 67.50567004840833
- type: nauc_recall_at_20_max
value: 68.28046074793383
- type: nauc_recall_at_20_std
value: -29.574869314866653
- type: nauc_recall_at_3_diff1
value: 73.0577497285433
- type: nauc_recall_at_3_max
value: 36.948110275313425
- type: nauc_recall_at_3_std
value: -59.30189498397615
- type: nauc_recall_at_5_diff1
value: 66.98956370201739
- type: nauc_recall_at_5_max
value: 37.16579792310329
- type: nauc_recall_at_5_std
value: -54.60597345402122
- type: ndcg_at_1
value: 69.968
- type: ndcg_at_10
value: 82.09100000000001
- type: ndcg_at_100
value: 83.177
- type: ndcg_at_1000
value: 83.258
- type: ndcg_at_20
value: 82.68799999999999
- type: ndcg_at_3
value: 78.666
- type: ndcg_at_5
value: 80.613
- type: precision_at_1
value: 69.968
- type: precision_at_10
value: 9.504999999999999
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.101
- type: precision_at_20
value: 4.868
- type: precision_at_3
value: 28.486
- type: precision_at_5
value: 18.082
- type: recall_at_1
value: 69.679
- type: recall_at_10
value: 94.099
- type: recall_at_100
value: 98.946
- type: recall_at_1000
value: 99.579
- type: recall_at_20
value: 96.417
- type: recall_at_3
value: 84.958
- type: recall_at_5
value: 89.726
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: main_score
value: 82.093
- type: map_at_1
value: 23.294
- type: map_at_10
value: 73.087
- type: map_at_100
value: 76.378
- type: map_at_1000
value: 76.429
- type: map_at_20
value: 75.645
- type: map_at_3
value: 49.49
- type: map_at_5
value: 62.79900000000001
- type: mrr_at_1
value: 82.65
- type: mrr_at_10
value: 88.56652777777776
- type: mrr_at_100
value: 88.65106019759902
- type: mrr_at_1000
value: 88.65548524359767
- type: mrr_at_20
value: 88.62234385196844
- type: mrr_at_3
value: 88.0333333333333
- type: mrr_at_5
value: 88.43083333333333
- type: nauc_map_at_1000_diff1
value: 3.017682068149073
- type: nauc_map_at_1000_max
value: 43.31894144534087
- type: nauc_map_at_1000_std
value: 14.103477261758462
- type: nauc_map_at_100_diff1
value: 3.01786018428549
- type: nauc_map_at_100_max
value: 43.304578781010584
- type: nauc_map_at_100_std
value: 14.104821995278524
- type: nauc_map_at_10_diff1
value: 6.00776493567358
- type: nauc_map_at_10_max
value: 40.050232117264265
- type: nauc_map_at_10_std
value: 3.8907867883058964
- type: nauc_map_at_1_diff1
value: 40.656271709573616
- type: nauc_map_at_1_max
value: -6.665245760519005
- type: nauc_map_at_1_std
value: -29.384443787821894
- type: nauc_map_at_20_diff1
value: 3.462215302112235
- type: nauc_map_at_20_max
value: 42.97592478608055
- type: nauc_map_at_20_std
value: 11.923153462330815
- type: nauc_map_at_3_diff1
value: 24.857326825495797
- type: nauc_map_at_3_max
value: 7.79715123136744
- type: nauc_map_at_3_std
value: -24.158608608669
- type: nauc_map_at_5_diff1
value: 16.134527943963175
- type: nauc_map_at_5_max
value: 21.945455683828534
- type: nauc_map_at_5_std
value: -15.417311822489824
- type: nauc_mrr_at_1000_diff1
value: 22.608720258580345
- type: nauc_mrr_at_1000_max
value: 57.14809743855488
- type: nauc_mrr_at_1000_std
value: 26.500042115342154
- type: nauc_mrr_at_100_diff1
value: 22.60822245173703
- type: nauc_mrr_at_100_max
value: 57.16085387711407
- type: nauc_mrr_at_100_std
value: 26.52114951859548
- type: nauc_mrr_at_10_diff1
value: 22.698266613067958
- type: nauc_mrr_at_10_max
value: 57.405277806586454
- type: nauc_mrr_at_10_std
value: 26.753463349560942
- type: nauc_mrr_at_1_diff1
value: 25.116149229327394
- type: nauc_mrr_at_1_max
value: 50.18786123051239
- type: nauc_mrr_at_1_std
value: 17.896523926314035
- type: nauc_mrr_at_20_diff1
value: 22.63109662240636
- type: nauc_mrr_at_20_max
value: 57.25789480886964
- type: nauc_mrr_at_20_std
value: 26.628848293894535
- type: nauc_mrr_at_3_diff1
value: 22.29030169026751
- type: nauc_mrr_at_3_max
value: 57.78690245871875
- type: nauc_mrr_at_3_std
value: 26.961874143079275
- type: nauc_mrr_at_5_diff1
value: 22.539256613417436
- type: nauc_mrr_at_5_max
value: 57.640952298152946
- type: nauc_mrr_at_5_std
value: 27.166131522241564
- type: nauc_ndcg_at_1000_diff1
value: 4.335459030896887
- type: nauc_ndcg_at_1000_max
value: 51.40790109857344
- type: nauc_ndcg_at_1000_std
value: 25.223663033428558
- type: nauc_ndcg_at_100_diff1
value: 3.756968920629851
- type: nauc_ndcg_at_100_max
value: 51.23131481991569
- type: nauc_ndcg_at_100_std
value: 25.896007604039635
- type: nauc_ndcg_at_10_diff1
value: 3.7299699790096703
- type: nauc_ndcg_at_10_max
value: 47.98647382256022
- type: nauc_ndcg_at_10_std
value: 17.025514680687277
- type: nauc_ndcg_at_1_diff1
value: 25.116149229327394
- type: nauc_ndcg_at_1_max
value: 50.18786123051239
- type: nauc_ndcg_at_1_std
value: 17.896523926314035
- type: nauc_ndcg_at_20_diff1
value: 3.692033975506179
- type: nauc_ndcg_at_20_max
value: 50.70003527682141
- type: nauc_ndcg_at_20_std
value: 22.512279629260227
- type: nauc_ndcg_at_3_diff1
value: 5.101141943602369
- type: nauc_ndcg_at_3_max
value: 44.526033252737705
- type: nauc_ndcg_at_3_std
value: 17.21985170533644
- type: nauc_ndcg_at_5_diff1
value: 5.128269340707157
- type: nauc_ndcg_at_5_max
value: 40.74953442421861
- type: nauc_ndcg_at_5_std
value: 10.54615337986913
- type: nauc_precision_at_1000_diff1
value: -28.088666590713135
- type: nauc_precision_at_1000_max
value: 23.005522720304104
- type: nauc_precision_at_1000_std
value: 50.173926122648524
- type: nauc_precision_at_100_diff1
value: -28.968645059600682
- type: nauc_precision_at_100_max
value: 25.04622827770351
- type: nauc_precision_at_100_std
value: 52.230491589978115
- type: nauc_precision_at_10_diff1
value: -30.253268763729245
- type: nauc_precision_at_10_max
value: 38.44381775116214
- type: nauc_precision_at_10_std
value: 47.93579661356217
- type: nauc_precision_at_1_diff1
value: 25.116149229327394
- type: nauc_precision_at_1_max
value: 50.18786123051239
- type: nauc_precision_at_1_std
value: 17.896523926314035
- type: nauc_precision_at_20_diff1
value: -29.78333017605082
- type: nauc_precision_at_20_max
value: 30.724852767715742
- type: nauc_precision_at_20_std
value: 51.556480994031176
- type: nauc_precision_at_3_diff1
value: -19.839530913679052
- type: nauc_precision_at_3_max
value: 46.97201811029464
- type: nauc_precision_at_3_std
value: 32.763601276627426
- type: nauc_precision_at_5_diff1
value: -26.491574031749167
- type: nauc_precision_at_5_max
value: 43.298145808496955
- type: nauc_precision_at_5_std
value: 37.30863792820846
- type: nauc_recall_at_1000_diff1
value: -30.13364129325676
- type: nauc_recall_at_1000_max
value: 73.24128272106563
- type: nauc_recall_at_1000_std
value: 78.93831159982587
- type: nauc_recall_at_100_diff1
value: -18.765607920053267
- type: nauc_recall_at_100_max
value: 54.712120419339364
- type: nauc_recall_at_100_std
value: 57.767960027082566
- type: nauc_recall_at_10_diff1
value: -0.6052835404182173
- type: nauc_recall_at_10_max
value: 39.946898924388954
- type: nauc_recall_at_10_std
value: 4.709923580866511
- type: nauc_recall_at_1_diff1
value: 40.656271709573616
- type: nauc_recall_at_1_max
value: -6.665245760519005
- type: nauc_recall_at_1_std
value: -29.384443787821894
- type: nauc_recall_at_20_diff1
value: -5.962280989061532
- type: nauc_recall_at_20_max
value: 50.09170736630004
- type: nauc_recall_at_20_std
value: 29.458350383857574
- type: nauc_recall_at_3_diff1
value: 22.545894407841793
- type: nauc_recall_at_3_max
value: 2.6193977834875533
- type: nauc_recall_at_3_std
value: -26.87014769293195
- type: nauc_recall_at_5_diff1
value: 13.352272138382745
- type: nauc_recall_at_5_max
value: 14.75948274133919
- type: nauc_recall_at_5_std
value: -20.70760567642474
- type: ndcg_at_1
value: 82.65
- type: ndcg_at_10
value: 82.093
- type: ndcg_at_100
value: 85.75500000000001
- type: ndcg_at_1000
value: 86.247
- type: ndcg_at_20
value: 84.218
- type: ndcg_at_3
value: 79.259
- type: ndcg_at_5
value: 78.691
- type: precision_at_1
value: 82.65
- type: precision_at_10
value: 40.21
- type: precision_at_100
value: 4.761
- type: precision_at_1000
value: 0.488
- type: precision_at_20
value: 22.303
- type: precision_at_3
value: 71.48299999999999
- type: precision_at_5
value: 60.83
- type: recall_at_1
value: 23.294
- type: recall_at_10
value: 84.98599999999999
- type: recall_at_100
value: 96.441
- type: recall_at_1000
value: 99.005
- type: recall_at_20
value: 91.263
- type: recall_at_3
value: 52.888000000000005
- type: recall_at_5
value: 69.48100000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: main_score
value: 62.514
- type: map_at_1
value: 46.800000000000004
- type: map_at_10
value: 57.108000000000004
- type: map_at_100
value: 57.665
- type: map_at_1000
value: 57.68600000000001
- type: map_at_20
value: 57.469
- type: map_at_3
value: 54.167
- type: map_at_5
value: 56.077
- type: mrr_at_1
value: 46.800000000000004
- type: mrr_at_10
value: 57.10785714285714
- type: mrr_at_100
value: 57.66479182756831
- type: mrr_at_1000
value: 57.685955034269185
- type: mrr_at_20
value: 57.46916307505702
- type: mrr_at_3
value: 54.16666666666663
- type: mrr_at_5
value: 56.07666666666664
- type: nauc_map_at_1000_diff1
value: 61.542672828066
- type: nauc_map_at_1000_max
value: 31.85700200032805
- type: nauc_map_at_1000_std
value: -11.620181705591662
- type: nauc_map_at_100_diff1
value: 61.53813237491788
- type: nauc_map_at_100_max
value: 31.874036133018084
- type: nauc_map_at_100_std
value: -11.59724786321096
- type: nauc_map_at_10_diff1
value: 61.38313778334582
- type: nauc_map_at_10_max
value: 31.740467380708182
- type: nauc_map_at_10_std
value: -12.100842206709821
- type: nauc_map_at_1_diff1
value: 63.66949701943299
- type: nauc_map_at_1_max
value: 28.133811910672573
- type: nauc_map_at_1_std
value: -14.453510006377535
- type: nauc_map_at_20_diff1
value: 61.5638057215127
- type: nauc_map_at_20_max
value: 31.904214948036756
- type: nauc_map_at_20_std
value: -11.719473194737628
- type: nauc_map_at_3_diff1
value: 61.19354745729959
- type: nauc_map_at_3_max
value: 29.813217610060548
- type: nauc_map_at_3_std
value: -13.883839488771295
- type: nauc_map_at_5_diff1
value: 61.08733612041498
- type: nauc_map_at_5_max
value: 31.255100654464012
- type: nauc_map_at_5_std
value: -12.09065665533858
- type: nauc_mrr_at_1000_diff1
value: 61.542672828066
- type: nauc_mrr_at_1000_max
value: 31.85700200032805
- type: nauc_mrr_at_1000_std
value: -11.620181705591662
- type: nauc_mrr_at_100_diff1
value: 61.53813237491788
- type: nauc_mrr_at_100_max
value: 31.874036133018084
- type: nauc_mrr_at_100_std
value: -11.59724786321096
- type: nauc_mrr_at_10_diff1
value: 61.38313778334582
- type: nauc_mrr_at_10_max
value: 31.740467380708182
- type: nauc_mrr_at_10_std
value: -12.100842206709821
- type: nauc_mrr_at_1_diff1
value: 63.66949701943299
- type: nauc_mrr_at_1_max
value: 28.133811910672573
- type: nauc_mrr_at_1_std
value: -14.453510006377535
- type: nauc_mrr_at_20_diff1
value: 61.5638057215127
- type: nauc_mrr_at_20_max
value: 31.904214948036756
- type: nauc_mrr_at_20_std
value: -11.719473194737628
- type: nauc_mrr_at_3_diff1
value: 61.19354745729959
- type: nauc_mrr_at_3_max
value: 29.813217610060548
- type: nauc_mrr_at_3_std
value: -13.883839488771295
- type: nauc_mrr_at_5_diff1
value: 61.08733612041498
- type: nauc_mrr_at_5_max
value: 31.255100654464012
- type: nauc_mrr_at_5_std
value: -12.09065665533858
- type: nauc_ndcg_at_1000_diff1
value: 61.404354519031024
- type: nauc_ndcg_at_1000_max
value: 34.5568056709905
- type: nauc_ndcg_at_1000_std
value: -8.194258261068375
- type: nauc_ndcg_at_100_diff1
value: 61.31111013617605
- type: nauc_ndcg_at_100_max
value: 35.081274620942295
- type: nauc_ndcg_at_100_std
value: -7.567587216846379
- type: nauc_ndcg_at_10_diff1
value: 60.796642472721004
- type: nauc_ndcg_at_10_max
value: 34.413253540105245
- type: nauc_ndcg_at_10_std
value: -10.263251244353334
- type: nauc_ndcg_at_1_diff1
value: 63.66949701943299
- type: nauc_ndcg_at_1_max
value: 28.133811910672573
- type: nauc_ndcg_at_1_std
value: -14.453510006377535
- type: nauc_ndcg_at_20_diff1
value: 61.439123475952975
- type: nauc_ndcg_at_20_max
value: 35.038091592005536
- type: nauc_ndcg_at_20_std
value: -8.792780272975662
- type: nauc_ndcg_at_3_diff1
value: 60.2950660942529
- type: nauc_ndcg_at_3_max
value: 30.257013442417087
- type: nauc_ndcg_at_3_std
value: -13.671873921177202
- type: nauc_ndcg_at_5_diff1
value: 60.04926753266181
- type: nauc_ndcg_at_5_max
value: 33.00050110783418
- type: nauc_ndcg_at_5_std
value: -10.293915982801868
- type: nauc_precision_at_1000_diff1
value: 65.86104527280983
- type: nauc_precision_at_1000_max
value: 92.22150398620967
- type: nauc_precision_at_1000_std
value: 80.3718068423948
- type: nauc_precision_at_100_diff1
value: 61.343931511998676
- type: nauc_precision_at_100_max
value: 77.89479428134884
- type: nauc_precision_at_100_std
value: 53.242509124861904
- type: nauc_precision_at_10_diff1
value: 58.498529223685814
- type: nauc_precision_at_10_max
value: 48.5105315454464
- type: nauc_precision_at_10_std
value: -0.8844333821952514
- type: nauc_precision_at_1_diff1
value: 63.66949701943299
- type: nauc_precision_at_1_max
value: 28.133811910672573
- type: nauc_precision_at_1_std
value: -14.453510006377535
- type: nauc_precision_at_20_diff1
value: 62.21692302833121
- type: nauc_precision_at_20_max
value: 56.42904519756148
- type: nauc_precision_at_20_std
value: 11.768421717570398
- type: nauc_precision_at_3_diff1
value: 57.386050314704676
- type: nauc_precision_at_3_max
value: 31.63922112989413
- type: nauc_precision_at_3_std
value: -12.983862277916117
- type: nauc_precision_at_5_diff1
value: 56.111301892551865
- type: nauc_precision_at_5_max
value: 39.97271825396829
- type: nauc_precision_at_5_std
value: -2.9622634310133646
- type: nauc_recall_at_1000_diff1
value: 65.86104527280992
- type: nauc_recall_at_1000_max
value: 92.22150398620987
- type: nauc_recall_at_1000_std
value: 80.37180684239502
- type: nauc_recall_at_100_diff1
value: 61.34393151199862
- type: nauc_recall_at_100_max
value: 77.89479428134887
- type: nauc_recall_at_100_std
value: 53.242509124862025
- type: nauc_recall_at_10_diff1
value: 58.49852922368592
- type: nauc_recall_at_10_max
value: 48.51053154544651
- type: nauc_recall_at_10_std
value: -0.8844333821952685
- type: nauc_recall_at_1_diff1
value: 63.66949701943299
- type: nauc_recall_at_1_max
value: 28.133811910672573
- type: nauc_recall_at_1_std
value: -14.453510006377535
- type: nauc_recall_at_20_diff1
value: 62.216923028331315
- type: nauc_recall_at_20_max
value: 56.429045197561635
- type: nauc_recall_at_20_std
value: 11.768421717570599
- type: nauc_recall_at_3_diff1
value: 57.38605031470464
- type: nauc_recall_at_3_max
value: 31.639221129894047
- type: nauc_recall_at_3_std
value: -12.983862277916192
- type: nauc_recall_at_5_diff1
value: 56.111301892551865
- type: nauc_recall_at_5_max
value: 39.97271825396825
- type: nauc_recall_at_5_std
value: -2.962263431013432
- type: ndcg_at_1
value: 46.800000000000004
- type: ndcg_at_10
value: 62.514
- type: ndcg_at_100
value: 65.22
- type: ndcg_at_1000
value: 65.717
- type: ndcg_at_20
value: 63.778999999999996
- type: ndcg_at_3
value: 56.58800000000001
- type: ndcg_at_5
value: 60.039
- type: precision_at_1
value: 46.800000000000004
- type: precision_at_10
value: 7.960000000000001
- type: precision_at_100
value: 0.923
- type: precision_at_1000
value: 0.096
- type: precision_at_20
value: 4.2250000000000005
- type: precision_at_3
value: 21.2
- type: precision_at_5
value: 14.399999999999999
- type: recall_at_1
value: 46.800000000000004
- type: recall_at_10
value: 79.60000000000001
- type: recall_at_100
value: 92.30000000000001
- type: recall_at_1000
value: 96.1
- type: recall_at_20
value: 84.5
- type: recall_at_3
value: 63.6
- type: recall_at_5
value: 72.0
task:
type: Retrieval
- dataset:
config: default
name: MTEB IFlyTek
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
split: validation
type: C-MTEB/IFlyTek-classification
metrics:
- type: accuracy
value: 49.018853405155824
- type: f1
value: 36.34797570897239
- type: f1_weighted
value: 46.595946626038284
- type: main_score
value: 49.018853405155824
task:
type: Classification
- dataset:
config: default
name: MTEB JDReview
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
split: test
type: C-MTEB/JDReview-classification
metrics:
- type: accuracy
value: 80.76923076923077
- type: ap
value: 43.91219315273788
- type: ap_weighted
value: 43.91219315273788
- type: f1
value: 74.3959076760867
- type: f1_weighted
value: 82.41054854790659
- type: main_score
value: 80.76923076923077
task:
type: Classification
- dataset:
config: default
name: MTEB LCQMC
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
split: test
type: C-MTEB/LCQMC
metrics:
- type: cosine_pearson
value: 66.42169614903062
- type: cosine_spearman
value: 69.6209380589209
- type: euclidean_pearson
value: 68.13684291689385
- type: euclidean_spearman
value: 69.62093584082648
- type: main_score
value: 69.6209380589209
- type: manhattan_pearson
value: 67.98872700847923
- type: manhattan_spearman
value: 69.46732039256112
- type: pearson
value: 66.42169614903062
- type: spearman
value: 69.6209380589209
task:
type: STS
- dataset:
config: default
name: MTEB MMarcoReranking
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
split: dev
type: C-MTEB/Mmarco-reranking
metrics:
- type: main_score
value: 28.40392786552284
- type: map
value: 28.40392786552284
- type: mrr
value: 26.729761904761908
- type: nAUC_map_diff1
value: 11.013649297702722
- type: nAUC_map_max
value: 10.17419646298121
- type: nAUC_map_std
value: -0.8563449479185579
- type: nAUC_mrr_diff1
value: 10.279159084348438
- type: nAUC_mrr_max
value: 9.945986054772508
- type: nAUC_mrr_std
value: -0.7829405326492496
task:
type: Reranking
- dataset:
config: default
name: MTEB MMarcoRetrieval
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: main_score
value: 78.527
- type: map_at_1
value: 65.179
- type: map_at_10
value: 74.603
- type: map_at_100
value: 74.957
- type: map_at_1000
value: 74.967
- type: map_at_20
value: 74.857
- type: map_at_3
value: 72.611
- type: map_at_5
value: 73.916
- type: mrr_at_1
value: 67.44985673352436
- type: mrr_at_10
value: 75.1962125346795
- type: mrr_at_100
value: 75.50889677029757
- type: mrr_at_1000
value: 75.51801374083685
- type: mrr_at_20
value: 75.42326193241115
- type: mrr_at_3
value: 73.4670487106018
- type: mrr_at_5
value: 74.58166189111732
- type: nauc_map_at_1000_diff1
value: 77.3975532985191
- type: nauc_map_at_1000_max
value: 38.64013999373193
- type: nauc_map_at_1000_std
value: -18.151216910688003
- type: nauc_map_at_100_diff1
value: 77.39458303918599
- type: nauc_map_at_100_max
value: 38.65525502999619
- type: nauc_map_at_100_std
value: -18.12441923873744
- type: nauc_map_at_10_diff1
value: 77.23576973574656
- type: nauc_map_at_10_max
value: 38.79698916303308
- type: nauc_map_at_10_std
value: -18.205472833807896
- type: nauc_map_at_1_diff1
value: 79.56817309653695
- type: nauc_map_at_1_max
value: 30.973318622760697
- type: nauc_map_at_1_std
value: -24.193358631119697
- type: nauc_map_at_20_diff1
value: 77.345553469177
- type: nauc_map_at_20_max
value: 38.72033702371551
- type: nauc_map_at_20_std
value: -18.10235546630277
- type: nauc_map_at_3_diff1
value: 77.1519821962318
- type: nauc_map_at_3_max
value: 37.252293129620995
- type: nauc_map_at_3_std
value: -19.84875198107134
- type: nauc_map_at_5_diff1
value: 77.2287177052444
- type: nauc_map_at_5_max
value: 38.476432730452075
- type: nauc_map_at_5_std
value: -18.833903805578974
- type: nauc_mrr_at_1000_diff1
value: 77.60661485922789
- type: nauc_mrr_at_1000_max
value: 39.26857638609446
- type: nauc_mrr_at_1000_std
value: -17.210038373130672
- type: nauc_mrr_at_100_diff1
value: 77.6047988273367
- type: nauc_mrr_at_100_max
value: 39.28361327448562
- type: nauc_mrr_at_100_std
value: -17.182790454560294
- type: nauc_mrr_at_10_diff1
value: 77.44371207652814
- type: nauc_mrr_at_10_max
value: 39.432881586168236
- type: nauc_mrr_at_10_std
value: -17.187536228701045
- type: nauc_mrr_at_1_diff1
value: 80.1195041268915
- type: nauc_mrr_at_1_max
value: 34.89315898346597
- type: nauc_mrr_at_1_std
value: -22.677099986196357
- type: nauc_mrr_at_20_diff1
value: 77.56988644291731
- type: nauc_mrr_at_20_max
value: 39.36167345604126
- type: nauc_mrr_at_20_std
value: -17.145663178457347
- type: nauc_mrr_at_3_diff1
value: 77.39068122320302
- type: nauc_mrr_at_3_max
value: 38.47661490489044
- type: nauc_mrr_at_3_std
value: -18.43635735134857
- type: nauc_mrr_at_5_diff1
value: 77.4281674181642
- type: nauc_mrr_at_5_max
value: 39.25097124947119
- type: nauc_mrr_at_5_std
value: -17.602522743868
- type: nauc_ndcg_at_1000_diff1
value: 76.95670356559812
- type: nauc_ndcg_at_1000_max
value: 40.6770789376407
- type: nauc_ndcg_at_1000_std
value: -14.94643027722271
- type: nauc_ndcg_at_100_diff1
value: 76.87957397912506
- type: nauc_ndcg_at_100_max
value: 41.19597481618689
- type: nauc_ndcg_at_100_std
value: -13.986176551639787
- type: nauc_ndcg_at_10_diff1
value: 76.10924614757609
- type: nauc_ndcg_at_10_max
value: 41.944551608825854
- type: nauc_ndcg_at_10_std
value: -14.226261266280796
- type: nauc_ndcg_at_1_diff1
value: 80.1195041268915
- type: nauc_ndcg_at_1_max
value: 34.89315898346597
- type: nauc_ndcg_at_1_std
value: -22.677099986196357
- type: nauc_ndcg_at_20_diff1
value: 76.54328645801156
- type: nauc_ndcg_at_20_max
value: 41.74852133446564
- type: nauc_ndcg_at_20_std
value: -13.721836426277093
- type: nauc_ndcg_at_3_diff1
value: 76.10773063555531
- type: nauc_ndcg_at_3_max
value: 38.87928533895388
- type: nauc_ndcg_at_3_std
value: -17.814064081229805
- type: nauc_ndcg_at_5_diff1
value: 76.12333455766735
- type: nauc_ndcg_at_5_max
value: 41.0111924070866
- type: nauc_ndcg_at_5_std
value: -15.867928392632393
- type: nauc_precision_at_1000_diff1
value: -16.14969196445021
- type: nauc_precision_at_1000_max
value: 19.73159766274731
- type: nauc_precision_at_1000_std
value: 27.142682237659233
- type: nauc_precision_at_100_diff1
value: -2.7404602427028384
- type: nauc_precision_at_100_max
value: 29.32737928846563
- type: nauc_precision_at_100_std
value: 31.47152367892466
- type: nauc_precision_at_10_diff1
value: 22.989404353424035
- type: nauc_precision_at_10_max
value: 41.47175896072229
- type: nauc_precision_at_10_std
value: 17.23968993050545
- type: nauc_precision_at_1_diff1
value: 80.1195041268915
- type: nauc_precision_at_1_max
value: 34.89315898346597
- type: nauc_precision_at_1_std
value: -22.677099986196357
- type: nauc_precision_at_20_diff1
value: 11.7431142315164
- type: nauc_precision_at_20_max
value: 37.384349885824264
- type: nauc_precision_at_20_std
value: 25.87695876238002
- type: nauc_precision_at_3_diff1
value: 47.30485784652924
- type: nauc_precision_at_3_max
value: 39.30794798179377
- type: nauc_precision_at_3_std
value: -3.0460303025064817
- type: nauc_precision_at_5_diff1
value: 35.666358661107026
- type: nauc_precision_at_5_max
value: 41.154619102386434
- type: nauc_precision_at_5_std
value: 6.165343239340201
- type: nauc_recall_at_1000_diff1
value: 70.47489516037629
- type: nauc_recall_at_1000_max
value: 86.38892936750754
- type: nauc_recall_at_1000_std
value: 71.41939627488728
- type: nauc_recall_at_100_diff1
value: 71.35454604674862
- type: nauc_recall_at_100_max
value: 78.8056119793468
- type: nauc_recall_at_100_std
value: 56.673602022438885
- type: nauc_recall_at_10_diff1
value: 68.01157430899912
- type: nauc_recall_at_10_max
value: 61.03890280082228
- type: nauc_recall_at_10_std
value: 10.215903390979168
- type: nauc_recall_at_1_diff1
value: 79.56817309653695
- type: nauc_recall_at_1_max
value: 30.973318622760697
- type: nauc_recall_at_1_std
value: -24.193358631119697
- type: nauc_recall_at_20_diff1
value: 68.89627277773923
- type: nauc_recall_at_20_max
value: 68.37263311017512
- type: nauc_recall_at_20_std
value: 26.936453327892735
- type: nauc_recall_at_3_diff1
value: 71.48557875771924
- type: nauc_recall_at_3_max
value: 42.86820384579516
- type: nauc_recall_at_3_std
value: -12.098244840151215
- type: nauc_recall_at_5_diff1
value: 70.2043239041581
- type: nauc_recall_at_5_max
value: 51.32402340231743
- type: nauc_recall_at_5_std
value: -3.7213044749573516
- type: ndcg_at_1
value: 67.45
- type: ndcg_at_10
value: 78.527
- type: ndcg_at_100
value: 80.022
- type: ndcg_at_1000
value: 80.295
- type: ndcg_at_20
value: 79.387
- type: ndcg_at_3
value: 74.775
- type: ndcg_at_5
value: 76.955
- type: precision_at_1
value: 67.45
- type: precision_at_10
value: 9.576
- type: precision_at_100
value: 1.03
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 4.968
- type: precision_at_3
value: 28.247
- type: precision_at_5
value: 18.12
- type: recall_at_1
value: 65.179
- type: recall_at_10
value: 90.059
- type: recall_at_100
value: 96.612
- type: recall_at_1000
value: 98.761
- type: recall_at_20
value: 93.345
- type: recall_at_3
value: 80.158
- type: recall_at_5
value: 85.33
task:
type: Retrieval
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 66.3987895090787
- type: f1
value: 64.01687665476737
- type: f1_weighted
value: 65.22982874187167
- type: main_score
value: 66.3987895090787
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveIntentClassification (zh-TW)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 57.36045729657027
- type: f1
value: 56.21747468274314
- type: f1_weighted
value: 55.328390649701
- type: main_score
value: 57.36045729657027
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 72.56893073301949
- type: f1
value: 72.51154136026366
- type: f1_weighted
value: 72.06311963012884
- type: main_score
value: 72.56893073301949
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveScenarioClassification (zh-TW)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 65.85406859448555
- type: f1
value: 66.48372498308458
- type: f1_weighted
value: 64.55871847643539
- type: main_score
value: 65.85406859448555
task:
type: Classification
- dataset:
config: default
name: MTEB MedicalRetrieval
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: main_score
value: 56.908
- type: map_at_1
value: 48.9
- type: map_at_10
value: 54.25
- type: map_at_100
value: 54.83
- type: map_at_1000
value: 54.882
- type: map_at_20
value: 54.56100000000001
- type: map_at_3
value: 52.849999999999994
- type: map_at_5
value: 53.68000000000001
- type: mrr_at_1
value: 48.8
- type: mrr_at_10
value: 54.199801587301565
- type: mrr_at_100
value: 54.77998734976407
- type: mrr_at_1000
value: 54.83211631195485
- type: mrr_at_20
value: 54.5113749215181
- type: mrr_at_3
value: 52.79999999999999
- type: mrr_at_5
value: 53.62999999999998
- type: nauc_map_at_1000_diff1
value: 77.0640933059526
- type: nauc_map_at_1000_max
value: 63.16274968632399
- type: nauc_map_at_1000_std
value: 18.619837049196065
- type: nauc_map_at_100_diff1
value: 77.04445583336185
- type: nauc_map_at_100_max
value: 63.15706393184247
- type: nauc_map_at_100_std
value: 18.64155998589979
- type: nauc_map_at_10_diff1
value: 77.22712088218655
- type: nauc_map_at_10_max
value: 63.30058912930664
- type: nauc_map_at_10_std
value: 18.160155214919893
- type: nauc_map_at_1_diff1
value: 80.61224354354235
- type: nauc_map_at_1_max
value: 62.572123712325435
- type: nauc_map_at_1_std
value: 14.871521237919676
- type: nauc_map_at_20_diff1
value: 77.07286173147263
- type: nauc_map_at_20_max
value: 63.202977088050595
- type: nauc_map_at_20_std
value: 18.57384319939196
- type: nauc_map_at_3_diff1
value: 77.7109995359582
- type: nauc_map_at_3_max
value: 63.78258137206212
- type: nauc_map_at_3_std
value: 18.042684958317746
- type: nauc_map_at_5_diff1
value: 77.5173268034033
- type: nauc_map_at_5_max
value: 63.60896273345633
- type: nauc_map_at_5_std
value: 18.337375109892935
- type: nauc_mrr_at_1000_diff1
value: 77.20209036966065
- type: nauc_mrr_at_1000_max
value: 62.97580811011348
- type: nauc_mrr_at_1000_std
value: 18.44115737398761
- type: nauc_mrr_at_100_diff1
value: 77.18226388841661
- type: nauc_mrr_at_100_max
value: 62.97038456010131
- type: nauc_mrr_at_100_std
value: 18.463125747032876
- type: nauc_mrr_at_10_diff1
value: 77.36328933490991
- type: nauc_mrr_at_10_max
value: 63.11563976266347
- type: nauc_mrr_at_10_std
value: 17.9835435088557
- type: nauc_mrr_at_1_diff1
value: 80.86832719436983
- type: nauc_mrr_at_1_max
value: 62.2229505238464
- type: nauc_mrr_at_1_std
value: 14.538993917649432
- type: nauc_mrr_at_20_diff1
value: 77.2097698787093
- type: nauc_mrr_at_20_max
value: 63.017080064318556
- type: nauc_mrr_at_20_std
value: 18.39623244159318
- type: nauc_mrr_at_3_diff1
value: 77.84444444444445
- type: nauc_mrr_at_3_max
value: 63.60112488521577
- type: nauc_mrr_at_3_std
value: 17.869513314967858
- type: nauc_mrr_at_5_diff1
value: 77.65216072112915
- type: nauc_mrr_at_5_max
value: 63.425697442969195
- type: nauc_mrr_at_5_std
value: 18.162393013741234
- type: nauc_ndcg_at_1000_diff1
value: 75.47130124736644
- type: nauc_ndcg_at_1000_max
value: 62.72720721246217
- type: nauc_ndcg_at_1000_std
value: 21.168388385323816
- type: nauc_ndcg_at_100_diff1
value: 74.89812399955154
- type: nauc_ndcg_at_100_max
value: 62.474891176235936
- type: nauc_ndcg_at_100_std
value: 21.705385352598352
- type: nauc_ndcg_at_10_diff1
value: 75.69785924655157
- type: nauc_ndcg_at_10_max
value: 62.99877901137755
- type: nauc_ndcg_at_10_std
value: 19.277137244210792
- type: nauc_ndcg_at_1_diff1
value: 80.61224354354235
- type: nauc_ndcg_at_1_max
value: 62.572123712325435
- type: nauc_ndcg_at_1_std
value: 14.871521237919676
- type: nauc_ndcg_at_20_diff1
value: 75.0990592321159
- type: nauc_ndcg_at_20_max
value: 62.6109408298258
- type: nauc_ndcg_at_20_std
value: 20.860473361161567
- type: nauc_ndcg_at_3_diff1
value: 76.8207938549394
- type: nauc_ndcg_at_3_max
value: 64.06713431084022
- type: nauc_ndcg_at_3_std
value: 19.115482194273362
- type: nauc_ndcg_at_5_diff1
value: 76.46349661203512
- type: nauc_ndcg_at_5_max
value: 63.75385264512038
- type: nauc_ndcg_at_5_std
value: 19.66201253273682
- type: nauc_precision_at_1000_diff1
value: 59.81158632607264
- type: nauc_precision_at_1000_max
value: 59.760023412349916
- type: nauc_precision_at_1000_std
value: 62.485193082207935
- type: nauc_precision_at_100_diff1
value: 62.08543769977759
- type: nauc_precision_at_100_max
value: 57.926010729102806
- type: nauc_precision_at_100_std
value: 43.01747151823387
- type: nauc_precision_at_10_diff1
value: 70.17035828112795
- type: nauc_precision_at_10_max
value: 61.55881019301375
- type: nauc_precision_at_10_std
value: 22.977660426034763
- type: nauc_precision_at_1_diff1
value: 80.61224354354235
- type: nauc_precision_at_1_max
value: 62.572123712325435
- type: nauc_precision_at_1_std
value: 14.871521237919676
- type: nauc_precision_at_20_diff1
value: 66.83361017733561
- type: nauc_precision_at_20_max
value: 59.54232843146045
- type: nauc_precision_at_20_std
value: 30.852559940015073
- type: nauc_precision_at_3_diff1
value: 74.15534470940514
- type: nauc_precision_at_3_max
value: 64.88848804069414
- type: nauc_precision_at_3_std
value: 22.362855802878954
- type: nauc_precision_at_5_diff1
value: 73.13872413328627
- type: nauc_precision_at_5_max
value: 64.11963501694296
- type: nauc_precision_at_5_std
value: 23.897642502455515
- type: nauc_recall_at_1000_diff1
value: 59.81158632607252
- type: nauc_recall_at_1000_max
value: 59.76002341234993
- type: nauc_recall_at_1000_std
value: 62.48519308220787
- type: nauc_recall_at_100_diff1
value: 62.08543769977762
- type: nauc_recall_at_100_max
value: 57.92601072910286
- type: nauc_recall_at_100_std
value: 43.01747151823391
- type: nauc_recall_at_10_diff1
value: 70.170358281128
- type: nauc_recall_at_10_max
value: 61.55881019301381
- type: nauc_recall_at_10_std
value: 22.97766042603487
- type: nauc_recall_at_1_diff1
value: 80.61224354354235
- type: nauc_recall_at_1_max
value: 62.572123712325435
- type: nauc_recall_at_1_std
value: 14.871521237919676
- type: nauc_recall_at_20_diff1
value: 66.83361017733564
- type: nauc_recall_at_20_max
value: 59.54232843146045
- type: nauc_recall_at_20_std
value: 30.85255994001517
- type: nauc_recall_at_3_diff1
value: 74.15534470940513
- type: nauc_recall_at_3_max
value: 64.88848804069413
- type: nauc_recall_at_3_std
value: 22.362855802878926
- type: nauc_recall_at_5_diff1
value: 73.13872413328633
- type: nauc_recall_at_5_max
value: 64.11963501694305
- type: nauc_recall_at_5_std
value: 23.897642502455604
- type: ndcg_at_1
value: 48.9
- type: ndcg_at_10
value: 56.908
- type: ndcg_at_100
value: 59.992999999999995
- type: ndcg_at_1000
value: 61.583
- type: ndcg_at_20
value: 58.044
- type: ndcg_at_3
value: 54.051
- type: ndcg_at_5
value: 55.54
- type: precision_at_1
value: 48.9
- type: precision_at_10
value: 6.529999999999999
- type: precision_at_100
value: 0.803
- type: precision_at_1000
value: 0.093
- type: precision_at_20
value: 3.49
- type: precision_at_3
value: 19.167
- type: precision_at_5
value: 12.22
- type: recall_at_1
value: 48.9
- type: recall_at_10
value: 65.3
- type: recall_at_100
value: 80.30000000000001
- type: recall_at_1000
value: 93.30000000000001
- type: recall_at_20
value: 69.8
- type: recall_at_3
value: 57.49999999999999
- type: recall_at_5
value: 61.1
task:
type: Retrieval
- dataset:
config: default
name: MTEB MultilingualSentiment
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
split: test
type: C-MTEB/MultilingualSentiment-classification
metrics:
- type: accuracy
value: 73.31666666666668
- type: f1
value: 72.28836634231243
- type: f1_weighted
value: 72.28836634231241
- type: main_score
value: 73.31666666666668
task:
type: Classification
- dataset:
config: default
name: MTEB Ocnli
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
split: validation
type: C-MTEB/OCNLI
metrics:
- type: cosine_accuracy
value: 67.02761234434217
- type: cosine_accuracy_threshold
value: 65.14335870742798
- type: cosine_ap
value: 69.4885294263304
- type: cosine_f1
value: 71.27996381727725
- type: cosine_f1_threshold
value: 58.83575081825256
- type: cosine_precision
value: 62.34177215189873
- type: cosine_recall
value: 83.21013727560718
- type: dot_accuracy
value: 67.02761234434217
- type: dot_accuracy_threshold
value: 65.14337062835693
- type: dot_ap
value: 69.4885294263304
- type: dot_f1
value: 71.27996381727725
- type: dot_f1_threshold
value: 58.83575677871704
- type: dot_precision
value: 62.34177215189873
- type: dot_recall
value: 83.21013727560718
- type: euclidean_accuracy
value: 67.02761234434217
- type: euclidean_accuracy_threshold
value: 83.49447250366211
- type: euclidean_ap
value: 69.4885294263304
- type: euclidean_f1
value: 71.27996381727725
- type: euclidean_f1_threshold
value: 90.7350480556488
- type: euclidean_precision
value: 62.34177215189873
- type: euclidean_recall
value: 83.21013727560718
- type: main_score
value: 69.4885294263304
- type: manhattan_accuracy
value: 66.91932864103953
- type: manhattan_accuracy_threshold
value: 1951.8356323242188
- type: manhattan_ap
value: 69.02432804239183
- type: manhattan_f1
value: 70.89991589571069
- type: manhattan_f1_threshold
value: 2201.4184951782227
- type: manhattan_precision
value: 58.909853249475894
- type: manhattan_recall
value: 89.01795142555439
- type: max_ap
value: 69.4885294263304
- type: max_f1
value: 71.27996381727725
- type: max_precision
value: 62.34177215189873
- type: max_recall
value: 89.01795142555439
- type: similarity_accuracy
value: 67.02761234434217
- type: similarity_accuracy_threshold
value: 65.14335870742798
- type: similarity_ap
value: 69.4885294263304
- type: similarity_f1
value: 71.27996381727725
- type: similarity_f1_threshold
value: 58.83575081825256
- type: similarity_precision
value: 62.34177215189873
- type: similarity_recall
value: 83.21013727560718
task:
type: PairClassification
- dataset:
config: default
name: MTEB OnlineShopping
revision: e610f2ebd179a8fda30ae534c3878750a96db120
split: test
type: C-MTEB/OnlineShopping-classification
metrics:
- type: accuracy
value: 90.09
- type: ap
value: 88.76450265603408
- type: ap_weighted
value: 88.76450265603408
- type: f1
value: 90.08779175324347
- type: f1_weighted
value: 90.08719838771795
- type: main_score
value: 90.09
task:
type: Classification
- dataset:
config: default
name: MTEB PAWSX
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
split: test
type: C-MTEB/PAWSX
metrics:
- type: cosine_pearson
value: 14.271650876491588
- type: cosine_spearman
value: 15.088934657692937
- type: euclidean_pearson
value: 17.64991910323611
- type: euclidean_spearman
value: 15.11015719401991
- type: main_score
value: 15.088934657692937
- type: manhattan_pearson
value: 17.627416265380024
- type: manhattan_spearman
value: 15.186102501045864
- type: pearson
value: 14.271650876491588
- type: spearman
value: 15.088934657692937
task:
type: STS
- dataset:
config: default
name: MTEB QBQTC
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
split: test
type: C-MTEB/QBQTC
metrics:
- type: cosine_pearson
value: 31.42374000164117
- type: cosine_spearman
value: 34.11139115201034
- type: euclidean_pearson
value: 31.86846452982553
- type: euclidean_spearman
value: 34.11160345676575
- type: main_score
value: 34.11139115201034
- type: manhattan_pearson
value: 31.78171047507477
- type: manhattan_spearman
value: 34.03769440675436
- type: pearson
value: 31.42374000164117
- type: spearman
value: 34.11139115201034
task:
type: STS
- dataset:
config: zh
name: MTEB STS22 (zh)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 70.18092147138205
- type: cosine_spearman
value: 69.90638729067848
- type: euclidean_pearson
value: 68.5214594150794
- type: euclidean_spearman
value: 69.8926146345444
- type: main_score
value: 69.90638729067848
- type: manhattan_pearson
value: 68.96098064777406
- type: manhattan_spearman
value: 70.49810937340672
- type: pearson
value: 70.18092147138205
- type: spearman
value: 69.90638729067848
task:
type: STS
- dataset:
config: zh-en
name: MTEB STS22 (zh-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 75.65018415517595
- type: cosine_spearman
value: 74.96983110528109
- type: euclidean_pearson
value: 77.0199252096022
- type: euclidean_spearman
value: 75.05313744822759
- type: main_score
value: 74.96983110528109
- type: manhattan_pearson
value: 77.28747618528581
- type: manhattan_spearman
value: 74.95188542213391
- type: pearson
value: 75.65018415517595
- type: spearman
value: 74.96983110528109
task:
type: STS
- dataset:
config: default
name: MTEB STSB
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
split: test
type: C-MTEB/STSB
metrics:
- type: cosine_pearson
value: 78.99516686642797
- type: cosine_spearman
value: 79.32633637626917
- type: euclidean_pearson
value: 78.21051836357536
- type: euclidean_spearman
value: 79.32612616365205
- type: main_score
value: 79.32633637626917
- type: manhattan_pearson
value: 78.18343539953231
- type: manhattan_spearman
value: 79.33355463587682
- type: pearson
value: 78.99516686642797
- type: spearman
value: 79.32633637626917
task:
type: STS
- dataset:
config: default
name: MTEB T2Reranking
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
split: dev
type: C-MTEB/T2Reranking
metrics:
- type: main_score
value: 66.50583475592573
- type: map
value: 66.50583475592573
- type: mrr
value: 76.66814435094733
- type: nAUC_map_diff1
value: -7.531687895205624
- type: nAUC_map_max
value: 31.536810866173976
- type: nAUC_map_std
value: 0.584045198013492
- type: nAUC_mrr_diff1
value: -5.20389538556461
- type: nAUC_mrr_max
value: 26.230205943854155
- type: nAUC_mrr_std
value: -2.321422405480513
task:
type: Reranking
- dataset:
config: default
name: MTEB T2Retrieval
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: main_score
value: 84.048
- type: map_at_1
value: 27.250000000000004
- type: map_at_10
value: 76.43799999999999
- type: map_at_100
value: 80.066
- type: map_at_1000
value: 80.136
- type: map_at_20
value: 79.194
- type: map_at_3
value: 53.787
- type: map_at_5
value: 66.06
- type: mrr_at_1
value: 89.4660704892162
- type: mrr_at_10
value: 92.02673022274553
- type: mrr_at_100
value: 92.11951616133179
- type: mrr_at_1000
value: 92.12325682782645
- type: mrr_at_20
value: 92.08937202287764
- type: mrr_at_3
value: 91.55853644280776
- type: mrr_at_5
value: 91.85947454556089
- type: nauc_map_at_1000_diff1
value: 14.991306664519879
- type: nauc_map_at_1000_max
value: 50.14205870015166
- type: nauc_map_at_1000_std
value: 20.531935138410972
- type: nauc_map_at_100_diff1
value: 14.981377145101368
- type: nauc_map_at_100_max
value: 50.0447401180562
- type: nauc_map_at_100_std
value: 20.47654947572488
- type: nauc_map_at_10_diff1
value: 18.790069500020213
- type: nauc_map_at_10_max
value: 37.18636615175541
- type: nauc_map_at_10_std
value: 4.309216124710264
- type: nauc_map_at_1_diff1
value: 50.94702228516873
- type: nauc_map_at_1_max
value: -23.434673439743328
- type: nauc_map_at_1_std
value: -36.270013046647115
- type: nauc_map_at_20_diff1
value: 15.442991212547918
- type: nauc_map_at_20_max
value: 47.53165224906053
- type: nauc_map_at_20_std
value: 17.091479886176085
- type: nauc_map_at_3_diff1
value: 37.34355641131019
- type: nauc_map_at_3_max
value: -9.627767798276931
- type: nauc_map_at_3_std
value: -33.623788261136816
- type: nauc_map_at_5_diff1
value: 30.08255691506382
- type: nauc_map_at_5_max
value: 7.523532625631027
- type: nauc_map_at_5_std
value: -22.873284280648562
- type: nauc_mrr_at_1000_diff1
value: 48.948136368672685
- type: nauc_mrr_at_1000_max
value: 79.31242146814085
- type: nauc_mrr_at_1000_std
value: 42.09118789494853
- type: nauc_mrr_at_100_diff1
value: 48.95105601935127
- type: nauc_mrr_at_100_max
value: 79.31972489396628
- type: nauc_mrr_at_100_std
value: 42.10749180847621
- type: nauc_mrr_at_10_diff1
value: 48.909737017066334
- type: nauc_mrr_at_10_max
value: 79.438878924473
- type: nauc_mrr_at_10_std
value: 42.22609309864849
- type: nauc_mrr_at_1_diff1
value: 49.17057164590014
- type: nauc_mrr_at_1_max
value: 75.50607518284367
- type: nauc_mrr_at_1_std
value: 36.14082103331818
- type: nauc_mrr_at_20_diff1
value: 48.972145239401705
- type: nauc_mrr_at_20_max
value: 79.37286170468568
- type: nauc_mrr_at_20_std
value: 42.15361640253828
- type: nauc_mrr_at_3_diff1
value: 48.73407413089388
- type: nauc_mrr_at_3_max
value: 79.31526640124694
- type: nauc_mrr_at_3_std
value: 41.87832848049768
- type: nauc_mrr_at_5_diff1
value: 48.92974709753988
- type: nauc_mrr_at_5_max
value: 79.52029263445817
- type: nauc_mrr_at_5_std
value: 42.2387927929394
- type: nauc_ndcg_at_1000_diff1
value: 19.852159219940212
- type: nauc_ndcg_at_1000_max
value: 61.78867818911231
- type: nauc_ndcg_at_1000_std
value: 33.12786556649802
- type: nauc_ndcg_at_100_diff1
value: 19.3709781000508
- type: nauc_ndcg_at_100_max
value: 60.84802300919614
- type: nauc_ndcg_at_100_std
value: 33.09600270707079
- type: nauc_ndcg_at_10_diff1
value: 18.890624683095215
- type: nauc_ndcg_at_10_max
value: 52.07035400648073
- type: nauc_ndcg_at_10_std
value: 21.215632742092755
- type: nauc_ndcg_at_1_diff1
value: 49.17057164590014
- type: nauc_ndcg_at_1_max
value: 75.50607518284367
- type: nauc_ndcg_at_1_std
value: 36.14082103331818
- type: nauc_ndcg_at_20_diff1
value: 19.15746849253811
- type: nauc_ndcg_at_20_max
value: 55.82176951048079
- type: nauc_ndcg_at_20_std
value: 26.477040534373803
- type: nauc_ndcg_at_3_diff1
value: 15.61757086504063
- type: nauc_ndcg_at_3_max
value: 66.07148250075376
- type: nauc_ndcg_at_3_std
value: 33.08315717230347
- type: nauc_ndcg_at_5_diff1
value: 15.934068427718106
- type: nauc_ndcg_at_5_max
value: 59.64275100530712
- type: nauc_ndcg_at_5_std
value: 28.197929106012136
- type: nauc_precision_at_1000_diff1
value: -32.14239275674187
- type: nauc_precision_at_1000_max
value: 49.003598734673425
- type: nauc_precision_at_1000_std
value: 60.77307108185476
- type: nauc_precision_at_100_diff1
value: -32.110716229470334
- type: nauc_precision_at_100_max
value: 50.85328281382415
- type: nauc_precision_at_100_std
value: 62.32808109717699
- type: nauc_precision_at_10_diff1
value: -31.837193489485628
- type: nauc_precision_at_10_max
value: 55.83705208493232
- type: nauc_precision_at_10_std
value: 57.50283019666919
- type: nauc_precision_at_1_diff1
value: 49.17057164590014
- type: nauc_precision_at_1_max
value: 75.50607518284367
- type: nauc_precision_at_1_std
value: 36.14082103331818
- type: nauc_precision_at_20_diff1
value: -32.044968169611735
- type: nauc_precision_at_20_max
value: 53.82174008549685
- type: nauc_precision_at_20_std
value: 61.46528672131028
- type: nauc_precision_at_3_diff1
value: -26.261125878602332
- type: nauc_precision_at_3_max
value: 66.0859983928659
- type: nauc_precision_at_3_std
value: 48.83715827055477
- type: nauc_precision_at_5_diff1
value: -31.13291937399241
- type: nauc_precision_at_5_max
value: 61.01429282172497
- type: nauc_precision_at_5_std
value: 52.76320524351461
- type: nauc_recall_at_1000_diff1
value: 6.214349212436889
- type: nauc_recall_at_1000_max
value: 59.08096875098299
- type: nauc_recall_at_1000_std
value: 62.01528677223324
- type: nauc_recall_at_100_diff1
value: 9.456254682836157
- type: nauc_recall_at_100_max
value: 53.09669357470267
- type: nauc_recall_at_100_std
value: 47.19170803245384
- type: nauc_recall_at_10_diff1
value: 17.067819451151244
- type: nauc_recall_at_10_max
value: 26.995954619298562
- type: nauc_recall_at_10_std
value: -1.358304137922756
- type: nauc_recall_at_1_diff1
value: 50.94702228516873
- type: nauc_recall_at_1_max
value: -23.434673439743328
- type: nauc_recall_at_1_std
value: -36.270013046647115
- type: nauc_recall_at_20_diff1
value: 12.166170322330789
- type: nauc_recall_at_20_max
value: 41.98372262379903
- type: nauc_recall_at_20_std
value: 21.231284446488473
- type: nauc_recall_at_3_diff1
value: 35.585610972927654
- type: nauc_recall_at_3_max
value: -14.184820983265075
- type: nauc_recall_at_3_std
value: -36.14847855262556
- type: nauc_recall_at_5_diff1
value: 29.050625754040084
- type: nauc_recall_at_5_max
value: -1.0410932842186966
- type: nauc_recall_at_5_std
value: -28.261646321102425
- type: ndcg_at_1
value: 89.46600000000001
- type: ndcg_at_10
value: 84.048
- type: ndcg_at_100
value: 87.69
- type: ndcg_at_1000
value: 88.369
- type: ndcg_at_20
value: 85.819
- type: ndcg_at_3
value: 85.473
- type: ndcg_at_5
value: 84.048
- type: precision_at_1
value: 89.46600000000001
- type: precision_at_10
value: 41.772
- type: precision_at_100
value: 4.993
- type: precision_at_1000
value: 0.515
- type: precision_at_20
value: 23.202
- type: precision_at_3
value: 74.779
- type: precision_at_5
value: 62.63999999999999
- type: recall_at_1
value: 27.250000000000004
- type: recall_at_10
value: 82.934
- type: recall_at_100
value: 94.815
- type: recall_at_1000
value: 98.294
- type: recall_at_20
value: 88.883
- type: recall_at_3
value: 55.458
- type: recall_at_5
value: 69.465
task:
type: Retrieval
- dataset:
config: default
name: MTEB TNews
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
split: validation
type: C-MTEB/TNews-classification
metrics:
- type: accuracy
value: 51.577000000000005
- type: f1
value: 49.3938790995325
- type: f1_weighted
value: 51.49872910589875
- type: main_score
value: 51.577000000000005
task:
type: Classification
- dataset:
config: default
name: MTEB ThuNewsClusteringP2P
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
split: test
type: C-MTEB/ThuNewsClusteringP2P
metrics:
- type: main_score
value: 61.3311446133969
- type: v_measure
value: 61.3311446133969
- type: v_measure_std
value: 1.4292037065102101
task:
type: Clustering
- dataset:
config: default
name: MTEB ThuNewsClusteringS2S
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
split: test
type: C-MTEB/ThuNewsClusteringS2S
metrics:
- type: main_score
value: 56.41668748695762
- type: v_measure
value: 56.41668748695762
- type: v_measure_std
value: 1.096715523512711
task:
type: Clustering
- dataset:
config: default
name: MTEB VideoRetrieval
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: main_score
value: 70.078
- type: map_at_1
value: 55.60000000000001
- type: map_at_10
value: 65.45100000000001
- type: map_at_100
value: 65.972
- type: map_at_1000
value: 65.983
- type: map_at_20
value: 65.807
- type: map_at_3
value: 63.233
- type: map_at_5
value: 64.66300000000001
- type: mrr_at_1
value: 55.60000000000001
- type: mrr_at_10
value: 65.4510714285715
- type: mrr_at_100
value: 65.97165962076099
- type: mrr_at_1000
value: 65.98320753283919
- type: mrr_at_20
value: 65.80718845439051
- type: mrr_at_3
value: 63.23333333333338
- type: mrr_at_5
value: 64.6633333333334
- type: nauc_map_at_1000_diff1
value: 61.870954535069615
- type: nauc_map_at_1000_max
value: 23.090300594918375
- type: nauc_map_at_1000_std
value: -37.76103949466824
- type: nauc_map_at_100_diff1
value: 61.86086531015621
- type: nauc_map_at_100_max
value: 23.103916177822935
- type: nauc_map_at_100_std
value: -37.754472602108585
- type: nauc_map_at_10_diff1
value: 61.95721168001316
- type: nauc_map_at_10_max
value: 22.895572163222226
- type: nauc_map_at_10_std
value: -38.336243891701066
- type: nauc_map_at_1_diff1
value: 64.2441219535636
- type: nauc_map_at_1_max
value: 20.64015444888544
- type: nauc_map_at_1_std
value: -35.13259877775077
- type: nauc_map_at_20_diff1
value: 61.843808986063124
- type: nauc_map_at_20_max
value: 23.043585376021333
- type: nauc_map_at_20_std
value: -37.96548127355041
- type: nauc_map_at_3_diff1
value: 61.69619207556679
- type: nauc_map_at_3_max
value: 23.42210304941044
- type: nauc_map_at_3_std
value: -38.25191353860321
- type: nauc_map_at_5_diff1
value: 61.86402019020591
- type: nauc_map_at_5_max
value: 22.978407043164168
- type: nauc_map_at_5_std
value: -38.543794878087006
- type: nauc_mrr_at_1000_diff1
value: 61.870954535069615
- type: nauc_mrr_at_1000_max
value: 23.090300594918375
- type: nauc_mrr_at_1000_std
value: -37.76103949466824
- type: nauc_mrr_at_100_diff1
value: 61.86086531015621
- type: nauc_mrr_at_100_max
value: 23.103916177822935
- type: nauc_mrr_at_100_std
value: -37.754472602108585
- type: nauc_mrr_at_10_diff1
value: 61.95721168001316
- type: nauc_mrr_at_10_max
value: 22.895572163222226
- type: nauc_mrr_at_10_std
value: -38.336243891701066
- type: nauc_mrr_at_1_diff1
value: 64.2441219535636
- type: nauc_mrr_at_1_max
value: 20.64015444888544
- type: nauc_mrr_at_1_std
value: -35.13259877775077
- type: nauc_mrr_at_20_diff1
value: 61.843808986063124
- type: nauc_mrr_at_20_max
value: 23.043585376021333
- type: nauc_mrr_at_20_std
value: -37.96548127355041
- type: nauc_mrr_at_3_diff1
value: 61.69619207556679
- type: nauc_mrr_at_3_max
value: 23.42210304941044
- type: nauc_mrr_at_3_std
value: -38.25191353860321
- type: nauc_mrr_at_5_diff1
value: 61.86402019020591
- type: nauc_mrr_at_5_max
value: 22.978407043164168
- type: nauc_mrr_at_5_std
value: -38.543794878087006
- type: nauc_ndcg_at_1000_diff1
value: 61.29794077219897
- type: nauc_ndcg_at_1000_max
value: 24.418905186535554
- type: nauc_ndcg_at_1000_std
value: -36.38675333575123
- type: nauc_ndcg_at_100_diff1
value: 61.01225965851154
- type: nauc_ndcg_at_100_max
value: 24.921415589027195
- type: nauc_ndcg_at_100_std
value: -36.16549229025807
- type: nauc_ndcg_at_10_diff1
value: 61.49476150514672
- type: nauc_ndcg_at_10_max
value: 23.679233291979195
- type: nauc_ndcg_at_10_std
value: -39.526250662147326
- type: nauc_ndcg_at_1_diff1
value: 64.2441219535636
- type: nauc_ndcg_at_1_max
value: 20.64015444888544
- type: nauc_ndcg_at_1_std
value: -35.13259877775077
- type: nauc_ndcg_at_20_diff1
value: 61.056344259506254
- type: nauc_ndcg_at_20_max
value: 24.4681696774435
- type: nauc_ndcg_at_20_std
value: -38.002129299338705
- type: nauc_ndcg_at_3_diff1
value: 60.9695336204443
- type: nauc_ndcg_at_3_max
value: 24.561743086278764
- type: nauc_ndcg_at_3_std
value: -39.34620193890538
- type: nauc_ndcg_at_5_diff1
value: 61.28536259871331
- type: nauc_ndcg_at_5_max
value: 23.821597091549947
- type: nauc_ndcg_at_5_std
value: -39.921602604282256
- type: nauc_precision_at_1000_diff1
value: 47.896936552397904
- type: nauc_precision_at_1000_max
value: 66.38433151038132
- type: nauc_precision_at_1000_std
value: 60.53532524120673
- type: nauc_precision_at_100_diff1
value: 44.28363938167843
- type: nauc_precision_at_100_max
value: 64.24732856105429
- type: nauc_precision_at_100_std
value: 17.97489366116728
- type: nauc_precision_at_10_diff1
value: 59.41726414200426
- type: nauc_precision_at_10_max
value: 27.71264331511937
- type: nauc_precision_at_10_std
value: -45.74776538959631
- type: nauc_precision_at_1_diff1
value: 64.2441219535636
- type: nauc_precision_at_1_max
value: 20.64015444888544
- type: nauc_precision_at_1_std
value: -35.13259877775077
- type: nauc_precision_at_20_diff1
value: 54.97651111807045
- type: nauc_precision_at_20_max
value: 36.89454610531955
- type: nauc_precision_at_20_std
value: -34.89329336495018
- type: nauc_precision_at_3_diff1
value: 58.51696906840075
- type: nauc_precision_at_3_max
value: 28.574341882931513
- type: nauc_precision_at_3_std
value: -43.137791865257384
- type: nauc_precision_at_5_diff1
value: 59.104993686253025
- type: nauc_precision_at_5_max
value: 27.228062999541013
- type: nauc_precision_at_5_std
value: -45.6178316381737
- type: nauc_recall_at_1000_diff1
value: 47.89693655239931
- type: nauc_recall_at_1000_max
value: 66.38433151038168
- type: nauc_recall_at_1000_std
value: 60.53532524120724
- type: nauc_recall_at_100_diff1
value: 44.28363938167848
- type: nauc_recall_at_100_max
value: 64.24732856105405
- type: nauc_recall_at_100_std
value: 17.974893661168153
- type: nauc_recall_at_10_diff1
value: 59.417264142004434
- type: nauc_recall_at_10_max
value: 27.7126433151196
- type: nauc_recall_at_10_std
value: -45.74776538959598
- type: nauc_recall_at_1_diff1
value: 64.2441219535636
- type: nauc_recall_at_1_max
value: 20.64015444888544
- type: nauc_recall_at_1_std
value: -35.13259877775077
- type: nauc_recall_at_20_diff1
value: 54.97651111807084
- type: nauc_recall_at_20_max
value: 36.89454610531971
- type: nauc_recall_at_20_std
value: -34.89329336495006
- type: nauc_recall_at_3_diff1
value: 58.51696906840065
- type: nauc_recall_at_3_max
value: 28.574341882931524
- type: nauc_recall_at_3_std
value: -43.13779186525737
- type: nauc_recall_at_5_diff1
value: 59.104993686253046
- type: nauc_recall_at_5_max
value: 27.228062999540985
- type: nauc_recall_at_5_std
value: -45.617831638173556
- type: ndcg_at_1
value: 55.60000000000001
- type: ndcg_at_10
value: 70.078
- type: ndcg_at_100
value: 72.489
- type: ndcg_at_1000
value: 72.794
- type: ndcg_at_20
value: 71.354
- type: ndcg_at_3
value: 65.645
- type: ndcg_at_5
value: 68.189
- type: precision_at_1
value: 55.60000000000001
- type: precision_at_10
value: 8.450000000000001
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.475
- type: precision_at_3
value: 24.2
- type: precision_at_5
value: 15.740000000000002
- type: recall_at_1
value: 55.60000000000001
- type: recall_at_10
value: 84.5
- type: recall_at_100
value: 95.5
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_20
value: 89.5
- type: recall_at_3
value: 72.6
- type: recall_at_5
value: 78.7
task:
type: Retrieval
- dataset:
config: default
name: MTEB Waimai
revision: 339287def212450dcaa9df8c22bf93e9980c7023
split: test
type: C-MTEB/waimai-classification
metrics:
- type: accuracy
value: 85.75999999999999
- type: ap
value: 68.22514159752903
- type: ap_weighted
value: 68.22514159752903
- type: f1
value: 83.93158616293009
- type: f1_weighted
value: 85.8229689427759
- type: main_score
value: 85.75999999999999
task:
type: Classification
- dataset:
config: default
name: MTEB AlloProfClusteringP2P
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: main_score
value: 66.69235568790974
- type: v_measure
value: 66.69235568790974
- type: v_measure_std
value: 2.537794350741746
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloProfClusteringS2S
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: main_score
value: 49.27280056656315
- type: v_measure
value: 49.27280056656315
- type: v_measure_std
value: 3.2810861239751716
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloprofReranking
revision: 65393d0d7a08a10b4e348135e824f385d420b0fd
split: test
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
metrics:
- type: main_score
value: 74.05051363767075
- type: map
value: 74.05051363767075
- type: mrr
value: 75.32834046111249
- type: nAUC_map_diff1
value: 53.43142734542149
- type: nAUC_map_max
value: 10.45363593380914
- type: nAUC_map_std
value: 18.04797969501808
- type: nAUC_mrr_diff1
value: 52.84895215306421
- type: nAUC_mrr_max
value: 11.161569184920731
- type: nAUC_mrr_std
value: 18.116278051231706
task:
type: Reranking
- dataset:
config: default
name: MTEB AlloprofRetrieval
revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
split: test
type: lyon-nlp/alloprof
metrics:
- type: main_score
value: 46.752
- type: map_at_1
value: 29.404000000000003
- type: map_at_10
value: 40.695
- type: map_at_100
value: 41.638999999999996
- type: map_at_1000
value: 41.686
- type: map_at_20
value: 41.293
- type: map_at_3
value: 37.464
- type: map_at_5
value: 39.314
- type: mrr_at_1
value: 29.404145077720205
- type: mrr_at_10
value: 40.69454724895149
- type: mrr_at_100
value: 41.6387718358502
- type: mrr_at_1000
value: 41.686352032537386
- type: mrr_at_20
value: 41.29302173047876
- type: mrr_at_3
value: 37.46401842256771
- type: mrr_at_5
value: 39.314191134139456
- type: nauc_map_at_1000_diff1
value: 36.81140646424009
- type: nauc_map_at_1000_max
value: 32.558382675482015
- type: nauc_map_at_1000_std
value: 1.3209245482601717
- type: nauc_map_at_100_diff1
value: 36.80623533104676
- type: nauc_map_at_100_max
value: 32.58259240121919
- type: nauc_map_at_100_std
value: 1.3357049662565006
- type: nauc_map_at_10_diff1
value: 36.701137264179415
- type: nauc_map_at_10_max
value: 32.39187216040168
- type: nauc_map_at_10_std
value: 1.080168559171855
- type: nauc_map_at_1_diff1
value: 41.17578040220583
- type: nauc_map_at_1_max
value: 29.250697582326456
- type: nauc_map_at_1_std
value: 0.015878420007215115
- type: nauc_map_at_20_diff1
value: 36.78320606729714
- type: nauc_map_at_20_max
value: 32.62394229122364
- type: nauc_map_at_20_std
value: 1.2875500759697867
- type: nauc_map_at_3_diff1
value: 36.61724743709236
- type: nauc_map_at_3_max
value: 31.439128101338948
- type: nauc_map_at_3_std
value: 0.6643615364760862
- type: nauc_map_at_5_diff1
value: 36.51290373132519
- type: nauc_map_at_5_max
value: 32.06362001986431
- type: nauc_map_at_5_std
value: 1.0077803528775056
- type: nauc_mrr_at_1000_diff1
value: 36.81140646424009
- type: nauc_mrr_at_1000_max
value: 32.558382675482015
- type: nauc_mrr_at_1000_std
value: 1.3209245482601717
- type: nauc_mrr_at_100_diff1
value: 36.80623533104676
- type: nauc_mrr_at_100_max
value: 32.58259240121919
- type: nauc_mrr_at_100_std
value: 1.3357049662565006
- type: nauc_mrr_at_10_diff1
value: 36.701137264179415
- type: nauc_mrr_at_10_max
value: 32.39187216040168
- type: nauc_mrr_at_10_std
value: 1.080168559171855
- type: nauc_mrr_at_1_diff1
value: 41.17578040220583
- type: nauc_mrr_at_1_max
value: 29.250697582326456
- type: nauc_mrr_at_1_std
value: 0.015878420007215115
- type: nauc_mrr_at_20_diff1
value: 36.78320606729714
- type: nauc_mrr_at_20_max
value: 32.62394229122364
- type: nauc_mrr_at_20_std
value: 1.2875500759697867
- type: nauc_mrr_at_3_diff1
value: 36.61724743709236
- type: nauc_mrr_at_3_max
value: 31.439128101338948
- type: nauc_mrr_at_3_std
value: 0.6643615364760862
- type: nauc_mrr_at_5_diff1
value: 36.51290373132519
- type: nauc_mrr_at_5_max
value: 32.06362001986431
- type: nauc_mrr_at_5_std
value: 1.0077803528775056
- type: nauc_ndcg_at_1000_diff1
value: 36.24076511538488
- type: nauc_ndcg_at_1000_max
value: 34.064413351133496
- type: nauc_ndcg_at_1000_std
value: 2.4530947188501884
- type: nauc_ndcg_at_100_diff1
value: 36.0927603024548
- type: nauc_ndcg_at_100_max
value: 34.98071528431376
- type: nauc_ndcg_at_100_std
value: 3.2048812019743806
- type: nauc_ndcg_at_10_diff1
value: 35.48231357450575
- type: nauc_ndcg_at_10_max
value: 34.23901754126376
- type: nauc_ndcg_at_10_std
value: 1.8216358086555313
- type: nauc_ndcg_at_1_diff1
value: 41.17578040220583
- type: nauc_ndcg_at_1_max
value: 29.250697582326456
- type: nauc_ndcg_at_1_std
value: 0.015878420007215115
- type: nauc_ndcg_at_20_diff1
value: 35.762077351924866
- type: nauc_ndcg_at_20_max
value: 35.131282428172504
- type: nauc_ndcg_at_20_std
value: 2.6314418022317088
- type: nauc_ndcg_at_3_diff1
value: 35.20458098278931
- type: nauc_ndcg_at_3_max
value: 32.10452974167028
- type: nauc_ndcg_at_3_std
value: 0.8794682266965334
- type: nauc_ndcg_at_5_diff1
value: 34.98508114807989
- type: nauc_ndcg_at_5_max
value: 33.262089912366264
- type: nauc_ndcg_at_5_std
value: 1.5319350722125793
- type: nauc_precision_at_1000_diff1
value: 44.666620982624345
- type: nauc_precision_at_1000_max
value: 75.29393255580452
- type: nauc_precision_at_1000_std
value: 55.59900299317424
- type: nauc_precision_at_100_diff1
value: 34.231014793455486
- type: nauc_precision_at_100_max
value: 57.643182221569056
- type: nauc_precision_at_100_std
value: 24.69069946083384
- type: nauc_precision_at_10_diff1
value: 31.574888849159986
- type: nauc_precision_at_10_max
value: 41.421761956959116
- type: nauc_precision_at_10_std
value: 4.763962617424729
- type: nauc_precision_at_1_diff1
value: 41.17578040220583
- type: nauc_precision_at_1_max
value: 29.250697582326456
- type: nauc_precision_at_1_std
value: 0.015878420007215115
- type: nauc_precision_at_20_diff1
value: 32.180018178061836
- type: nauc_precision_at_20_max
value: 47.75245184649933
- type: nauc_precision_at_20_std
value: 9.788615791772633
- type: nauc_precision_at_3_diff1
value: 31.174995495672274
- type: nauc_precision_at_3_max
value: 33.99858581358525
- type: nauc_precision_at_3_std
value: 1.4974582520924251
- type: nauc_precision_at_5_diff1
value: 30.35676602203525
- type: nauc_precision_at_5_max
value: 37.047443567623354
- type: nauc_precision_at_5_std
value: 3.2312689286293024
- type: nauc_recall_at_1000_diff1
value: 44.666620982624515
- type: nauc_recall_at_1000_max
value: 75.29393255580267
- type: nauc_recall_at_1000_std
value: 55.59900299317372
- type: nauc_recall_at_100_diff1
value: 34.23101479345545
- type: nauc_recall_at_100_max
value: 57.64318222156907
- type: nauc_recall_at_100_std
value: 24.690699460833915
- type: nauc_recall_at_10_diff1
value: 31.574888849159976
- type: nauc_recall_at_10_max
value: 41.42176195695914
- type: nauc_recall_at_10_std
value: 4.763962617424782
- type: nauc_recall_at_1_diff1
value: 41.17578040220583
- type: nauc_recall_at_1_max
value: 29.250697582326456
- type: nauc_recall_at_1_std
value: 0.015878420007215115
- type: nauc_recall_at_20_diff1
value: 32.18001817806187
- type: nauc_recall_at_20_max
value: 47.75245184649934
- type: nauc_recall_at_20_std
value: 9.788615791772733
- type: nauc_recall_at_3_diff1
value: 31.17499549567227
- type: nauc_recall_at_3_max
value: 33.99858581358531
- type: nauc_recall_at_3_std
value: 1.4974582520924073
- type: nauc_recall_at_5_diff1
value: 30.356766022035238
- type: nauc_recall_at_5_max
value: 37.047443567623354
- type: nauc_recall_at_5_std
value: 3.2312689286292806
- type: ndcg_at_1
value: 29.404000000000003
- type: ndcg_at_10
value: 46.752
- type: ndcg_at_100
value: 51.43
- type: ndcg_at_1000
value: 52.76499999999999
- type: ndcg_at_20
value: 48.92
- type: ndcg_at_3
value: 40.106
- type: ndcg_at_5
value: 43.445
- type: precision_at_1
value: 29.404000000000003
- type: precision_at_10
value: 6.601999999999999
- type: precision_at_100
value: 0.881
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 3.728
- type: precision_at_3
value: 15.918
- type: precision_at_5
value: 11.174000000000001
- type: recall_at_1
value: 29.404000000000003
- type: recall_at_10
value: 66.019
- type: recall_at_100
value: 88.126
- type: recall_at_1000
value: 98.791
- type: recall_at_20
value: 74.568
- type: recall_at_3
value: 47.754999999999995
- type: recall_at_5
value: 55.872
task:
type: Retrieval
- dataset:
config: fr
name: MTEB AmazonReviewsClassification (fr)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 44.847999999999985
- type: f1
value: 41.93605853189159
- type: f1_weighted
value: 41.93605853189159
- type: main_score
value: 44.847999999999985
task:
type: Classification
- dataset:
config: default
name: MTEB BSARDRetrieval
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
split: test
type: maastrichtlawtech/bsard
metrics:
- type: main_score
value: 58.559000000000005
- type: map_at_1
value: 10.36
- type: map_at_10
value: 16.758
- type: map_at_100
value: 17.716
- type: map_at_1000
value: 17.816000000000003
- type: map_at_20
value: 17.221
- type: map_at_3
value: 14.565
- type: map_at_5
value: 15.870999999999999
- type: mrr_at_1
value: 10.36036036036036
- type: mrr_at_10
value: 16.758186758186753
- type: mrr_at_100
value: 17.715800685239955
- type: mrr_at_1000
value: 17.816056728488995
- type: mrr_at_20
value: 17.221227569524782
- type: mrr_at_3
value: 14.564564564564561
- type: mrr_at_5
value: 15.870870870870865
- type: nauc_map_at_1000_diff1
value: 13.581189454277641
- type: nauc_map_at_1000_max
value: 23.489691228117813
- type: nauc_map_at_1000_std
value: 5.6307865456405395
- type: nauc_map_at_100_diff1
value: 13.454198011114709
- type: nauc_map_at_100_max
value: 23.45922415373145
- type: nauc_map_at_100_std
value: 5.616848031628102
- type: nauc_map_at_10_diff1
value: 13.320234520737017
- type: nauc_map_at_10_max
value: 23.234237599237463
- type: nauc_map_at_10_std
value: 4.544384095472259
- type: nauc_map_at_1_diff1
value: 19.723683325024975
- type: nauc_map_at_1_max
value: 20.464053097615416
- type: nauc_map_at_1_std
value: 2.099858103167991
- type: nauc_map_at_20_diff1
value: 13.743084308870731
- type: nauc_map_at_20_max
value: 23.529304709994932
- type: nauc_map_at_20_std
value: 5.326637193786957
- type: nauc_map_at_3_diff1
value: 11.829713917206632
- type: nauc_map_at_3_max
value: 20.982180859889315
- type: nauc_map_at_3_std
value: 2.6604076449483416
- type: nauc_map_at_5_diff1
value: 13.25993802690841
- type: nauc_map_at_5_max
value: 21.63314647686895
- type: nauc_map_at_5_std
value: 2.762539517745844
- type: nauc_mrr_at_1000_diff1
value: 13.581189454277641
- type: nauc_mrr_at_1000_max
value: 23.489691228117813
- type: nauc_mrr_at_1000_std
value: 5.6307865456405395
- type: nauc_mrr_at_100_diff1
value: 13.454198011114709
- type: nauc_mrr_at_100_max
value: 23.45922415373145
- type: nauc_mrr_at_100_std
value: 5.616848031628102
- type: nauc_mrr_at_10_diff1
value: 13.320234520737017
- type: nauc_mrr_at_10_max
value: 23.234237599237463
- type: nauc_mrr_at_10_std
value: 4.544384095472259
- type: nauc_mrr_at_1_diff1
value: 19.723683325024975
- type: nauc_mrr_at_1_max
value: 20.464053097615416
- type: nauc_mrr_at_1_std
value: 2.099858103167991
- type: nauc_mrr_at_20_diff1
value: 13.743084308870731
- type: nauc_mrr_at_20_max
value: 23.529304709994932
- type: nauc_mrr_at_20_std
value: 5.326637193786957
- type: nauc_mrr_at_3_diff1
value: 11.829713917206632
- type: nauc_mrr_at_3_max
value: 20.982180859889315
- type: nauc_mrr_at_3_std
value: 2.6604076449483416
- type: nauc_mrr_at_5_diff1
value: 13.25993802690841
- type: nauc_mrr_at_5_max
value: 21.63314647686895
- type: nauc_mrr_at_5_std
value: 2.762539517745844
- type: nauc_ndcg_at_1000_diff1
value: 13.707503108989783
- type: nauc_ndcg_at_1000_max
value: 25.949859334474194
- type: nauc_ndcg_at_1000_std
value: 11.30077185095291
- type: nauc_ndcg_at_100_diff1
value: 11.488652396242538
- type: nauc_ndcg_at_100_max
value: 25.577496900047457
- type: nauc_ndcg_at_100_std
value: 11.594574152798417
- type: nauc_ndcg_at_10_diff1
value: 12.238261856743057
- type: nauc_ndcg_at_10_max
value: 25.70940084264975
- type: nauc_ndcg_at_10_std
value: 6.674709323258127
- type: nauc_ndcg_at_1_diff1
value: 19.723683325024975
- type: nauc_ndcg_at_1_max
value: 20.464053097615416
- type: nauc_ndcg_at_1_std
value: 2.099858103167991
- type: nauc_ndcg_at_20_diff1
value: 13.554982508741379
- type: nauc_ndcg_at_20_max
value: 26.121920197241778
- type: nauc_ndcg_at_20_std
value: 8.855936872536278
- type: nauc_ndcg_at_3_diff1
value: 9.59924858769597
- type: nauc_ndcg_at_3_max
value: 21.202502594505308
- type: nauc_ndcg_at_3_std
value: 2.9122811723533566
- type: nauc_ndcg_at_5_diff1
value: 12.117243393169327
- type: nauc_ndcg_at_5_max
value: 22.382086327774463
- type: nauc_ndcg_at_5_std
value: 3.068185747546371
- type: nauc_precision_at_1000_diff1
value: 21.314687056528214
- type: nauc_precision_at_1000_max
value: 35.85736416644202
- type: nauc_precision_at_1000_std
value: 41.215589583356014
- type: nauc_precision_at_100_diff1
value: 4.841538567838315
- type: nauc_precision_at_100_max
value: 29.796025601556465
- type: nauc_precision_at_100_std
value: 31.66461426950881
- type: nauc_precision_at_10_diff1
value: 10.2769925656981
- type: nauc_precision_at_10_max
value: 31.610465042792512
- type: nauc_precision_at_10_std
value: 11.729838363348398
- type: nauc_precision_at_1_diff1
value: 19.723683325024975
- type: nauc_precision_at_1_max
value: 20.464053097615416
- type: nauc_precision_at_1_std
value: 2.099858103167991
- type: nauc_precision_at_20_diff1
value: 14.122666091725545
- type: nauc_precision_at_20_max
value: 31.813794575630656
- type: nauc_precision_at_20_std
value: 17.44031269111964
- type: nauc_precision_at_3_diff1
value: 4.41887012868526
- type: nauc_precision_at_3_max
value: 21.73037689396608
- type: nauc_precision_at_3_std
value: 3.5177146563010777
- type: nauc_precision_at_5_diff1
value: 9.911736958870145
- type: nauc_precision_at_5_max
value: 24.17828887763417
- type: nauc_precision_at_5_std
value: 3.758711226096333
- type: nauc_recall_at_1000_diff1
value: 21.314687056528154
- type: nauc_recall_at_1000_max
value: 35.85736416644197
- type: nauc_recall_at_1000_std
value: 41.21558958335586
- type: nauc_recall_at_100_diff1
value: 4.841538567838269
- type: nauc_recall_at_100_max
value: 29.79602560155637
- type: nauc_recall_at_100_std
value: 31.66461426950869
- type: nauc_recall_at_10_diff1
value: 10.276992565698032
- type: nauc_recall_at_10_max
value: 31.610465042792473
- type: nauc_recall_at_10_std
value: 11.729838363348378
- type: nauc_recall_at_1_diff1
value: 19.723683325024975
- type: nauc_recall_at_1_max
value: 20.464053097615416
- type: nauc_recall_at_1_std
value: 2.099858103167991
- type: nauc_recall_at_20_diff1
value: 14.122666091725526
- type: nauc_recall_at_20_max
value: 31.813794575630638
- type: nauc_recall_at_20_std
value: 17.440312691119587
- type: nauc_recall_at_3_diff1
value: 4.4188701286852785
- type: nauc_recall_at_3_max
value: 21.7303768939661
- type: nauc_recall_at_3_std
value: 3.5177146563010853
- type: nauc_recall_at_5_diff1
value: 9.911736958870106
- type: nauc_recall_at_5_max
value: 24.178288877634106
- type: nauc_recall_at_5_std
value: 3.758711226096281
- type: ndcg_at_1
value: 10.36
- type: ndcg_at_10
value: 20.471
- type: ndcg_at_100
value: 25.777
- type: ndcg_at_1000
value: 28.593000000000004
- type: ndcg_at_20
value: 22.246
- type: ndcg_at_3
value: 15.916
- type: ndcg_at_5
value: 18.3
- type: precision_at_1
value: 10.36
- type: precision_at_10
value: 3.243
- type: precision_at_100
value: 0.586
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_20
value: 1.982
- type: precision_at_3
value: 6.607
- type: precision_at_5
value: 5.135
- type: recall_at_1
value: 10.36
- type: recall_at_10
value: 32.432
- type: recall_at_100
value: 58.559000000000005
- type: recall_at_1000
value: 81.081
- type: recall_at_20
value: 39.64
- type: recall_at_3
value: 19.82
- type: recall_at_5
value: 25.676
task:
type: Retrieval
- dataset:
config: default
name: MTEB HALClusteringS2S
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
split: test
type: lyon-nlp/clustering-hal-s2s
metrics:
- type: main_score
value: 26.918470641446472
- type: v_measure
value: 26.918470641446472
- type: v_measure_std
value: 2.717665658348912
task:
type: Clustering
- dataset:
config: fr
name: MTEB MLSUMClusteringP2P (fr)
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: reciTAL/mlsum
metrics:
- type: main_score
value: 45.581413658149
- type: v_measure
value: 45.581413658149
- type: v_measure_std
value: 1.646260736751199
task:
type: Clustering
- dataset:
config: fr
name: MTEB MLSUMClusteringS2S (fr)
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: reciTAL/mlsum
metrics:
- type: main_score
value: 44.45455749734905
- type: v_measure
value: 44.45455749734905
- type: v_measure_std
value: 1.935205028548908
task:
type: Clustering
- dataset:
config: fr
name: MTEB MTOPDomainClassification (fr)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 80.14719699342312
- type: f1
value: 79.68802657402165
- type: f1_weighted
value: 79.85763712873417
- type: main_score
value: 80.14719699342312
task:
type: Classification
- dataset:
config: fr
name: MTEB MTOPIntentClassification (fr)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 50.241152521139995
- type: f1
value: 34.39524038805554
- type: f1_weighted
value: 53.93775073819592
- type: main_score
value: 50.241152521139995
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClassification (fra)
revision: 18193f187b92da67168c655c9973a165ed9593dd
split: test
type: mteb/masakhanews
metrics:
- type: accuracy
value: 83.34123222748818
- type: f1
value: 79.48624508308065
- type: f1_weighted
value: 83.20210238500908
- type: main_score
value: 83.34123222748818
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringP2P (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: main_score
value: 71.51218291988776
- type: v_measure
value: 71.51218291988776
- type: v_measure_std
value: 35.6439739308977
task:
type: Clustering
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringS2S (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: main_score
value: 60.155743100795725
- type: v_measure
value: 60.155743100795725
- type: v_measure_std
value: 28.180226808833797
task:
type: Clustering
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 59.048419636852735
- type: f1
value: 55.77513997227217
- type: f1_weighted
value: 57.65743868976365
- type: main_score
value: 59.048419636852735
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 67.2932078009415
- type: f1
value: 66.85444841091169
- type: f1_weighted
value: 66.78952167770717
- type: main_score
value: 67.2932078009415
task:
type: Classification
- dataset:
config: fr
name: MTEB MintakaRetrieval (fr)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: jinaai/mintakaqa
metrics:
- type: main_score
value: 27.400000000000002
- type: map_at_1
value: 15.479000000000001
- type: map_at_10
value: 23.213
- type: map_at_100
value: 24.285
- type: map_at_1000
value: 24.397
- type: map_at_20
value: 23.741
- type: map_at_3
value: 20.973
- type: map_at_5
value: 22.212
- type: mrr_at_1
value: 15.47911547911548
- type: mrr_at_10
value: 23.21331071331073
- type: mrr_at_100
value: 24.28515184787565
- type: mrr_at_1000
value: 24.396606382776362
- type: mrr_at_20
value: 23.74068872679202
- type: mrr_at_3
value: 20.973245973245938
- type: mrr_at_5
value: 22.211984711984677
- type: nauc_map_at_1000_diff1
value: 23.49190518969821
- type: nauc_map_at_1000_max
value: 21.816535185868748
- type: nauc_map_at_1000_std
value: 7.898426575861743
- type: nauc_map_at_100_diff1
value: 23.455942061154243
- type: nauc_map_at_100_max
value: 21.80945301878854
- type: nauc_map_at_100_std
value: 7.903289282168091
- type: nauc_map_at_10_diff1
value: 23.674951138068714
- type: nauc_map_at_10_max
value: 21.969792385911845
- type: nauc_map_at_10_std
value: 7.889585005426794
- type: nauc_map_at_1_diff1
value: 29.069568522388433
- type: nauc_map_at_1_max
value: 19.942608291469913
- type: nauc_map_at_1_std
value: 3.1283142332992635
- type: nauc_map_at_20_diff1
value: 23.45502400622297
- type: nauc_map_at_20_max
value: 21.830527331051552
- type: nauc_map_at_20_std
value: 7.994053361768913
- type: nauc_map_at_3_diff1
value: 24.982668301358444
- type: nauc_map_at_3_max
value: 21.883837899231867
- type: nauc_map_at_3_std
value: 6.615976792964795
- type: nauc_map_at_5_diff1
value: 24.09866390229764
- type: nauc_map_at_5_max
value: 21.614008493220986
- type: nauc_map_at_5_std
value: 7.272332396807288
- type: nauc_mrr_at_1000_diff1
value: 23.49190518969821
- type: nauc_mrr_at_1000_max
value: 21.816535185868748
- type: nauc_mrr_at_1000_std
value: 7.898426575861743
- type: nauc_mrr_at_100_diff1
value: 23.455942061154243
- type: nauc_mrr_at_100_max
value: 21.80945301878854
- type: nauc_mrr_at_100_std
value: 7.903289282168091
- type: nauc_mrr_at_10_diff1
value: 23.674951138068714
- type: nauc_mrr_at_10_max
value: 21.969792385911845
- type: nauc_mrr_at_10_std
value: 7.889585005426794
- type: nauc_mrr_at_1_diff1
value: 29.069568522388433
- type: nauc_mrr_at_1_max
value: 19.942608291469913
- type: nauc_mrr_at_1_std
value: 3.1283142332992635
- type: nauc_mrr_at_20_diff1
value: 23.45502400622297
- type: nauc_mrr_at_20_max
value: 21.830527331051552
- type: nauc_mrr_at_20_std
value: 7.994053361768913
- type: nauc_mrr_at_3_diff1
value: 24.982668301358444
- type: nauc_mrr_at_3_max
value: 21.883837899231867
- type: nauc_mrr_at_3_std
value: 6.615976792964795
- type: nauc_mrr_at_5_diff1
value: 24.09866390229764
- type: nauc_mrr_at_5_max
value: 21.614008493220986
- type: nauc_mrr_at_5_std
value: 7.272332396807288
- type: nauc_ndcg_at_1000_diff1
value: 21.92872678950541
- type: nauc_ndcg_at_1000_max
value: 22.388970258338958
- type: nauc_ndcg_at_1000_std
value: 9.807006541293186
- type: nauc_ndcg_at_100_diff1
value: 20.903304276761364
- type: nauc_ndcg_at_100_max
value: 22.209897726716065
- type: nauc_ndcg_at_100_std
value: 10.075543107880176
- type: nauc_ndcg_at_10_diff1
value: 21.508944950669097
- type: nauc_ndcg_at_10_max
value: 22.709862035037514
- type: nauc_ndcg_at_10_std
value: 10.00450608801698
- type: nauc_ndcg_at_1_diff1
value: 29.069568522388433
- type: nauc_ndcg_at_1_max
value: 19.942608291469913
- type: nauc_ndcg_at_1_std
value: 3.1283142332992635
- type: nauc_ndcg_at_20_diff1
value: 20.803145422787512
- type: nauc_ndcg_at_20_max
value: 22.310429618526772
- type: nauc_ndcg_at_20_std
value: 10.366058782551438
- type: nauc_ndcg_at_3_diff1
value: 23.913619145125207
- type: nauc_ndcg_at_3_max
value: 22.441574203993245
- type: nauc_ndcg_at_3_std
value: 7.691311158754716
- type: nauc_ndcg_at_5_diff1
value: 22.4840009470751
- type: nauc_ndcg_at_5_max
value: 22.024641703222514
- type: nauc_ndcg_at_5_std
value: 8.803747702599477
- type: nauc_precision_at_1000_diff1
value: 17.037870101460467
- type: nauc_precision_at_1000_max
value: 42.30306938098229
- type: nauc_precision_at_1000_std
value: 54.251307689225115
- type: nauc_precision_at_100_diff1
value: 12.076659360813839
- type: nauc_precision_at_100_max
value: 23.254576247061777
- type: nauc_precision_at_100_std
value: 17.80398606936446
- type: nauc_precision_at_10_diff1
value: 16.05902741145243
- type: nauc_precision_at_10_max
value: 24.536458909415416
- type: nauc_precision_at_10_std
value: 15.281423796153165
- type: nauc_precision_at_1_diff1
value: 29.069568522388433
- type: nauc_precision_at_1_max
value: 19.942608291469913
- type: nauc_precision_at_1_std
value: 3.1283142332992635
- type: nauc_precision_at_20_diff1
value: 13.618514792543918
- type: nauc_precision_at_20_max
value: 23.357417389310335
- type: nauc_precision_at_20_std
value: 16.6297119945886
- type: nauc_precision_at_3_diff1
value: 21.3058697791068
- type: nauc_precision_at_3_max
value: 23.815582518552716
- type: nauc_precision_at_3_std
value: 10.358496834243757
- type: nauc_precision_at_5_diff1
value: 18.54677328441144
- type: nauc_precision_at_5_max
value: 22.987289739937104
- type: nauc_precision_at_5_std
value: 12.591593599364307
- type: nauc_recall_at_1000_diff1
value: 17.03787010146031
- type: nauc_recall_at_1000_max
value: 42.303069380982336
- type: nauc_recall_at_1000_std
value: 54.25130768922508
- type: nauc_recall_at_100_diff1
value: 12.076659360813771
- type: nauc_recall_at_100_max
value: 23.254576247061777
- type: nauc_recall_at_100_std
value: 17.80398606936441
- type: nauc_recall_at_10_diff1
value: 16.05902741145243
- type: nauc_recall_at_10_max
value: 24.536458909415412
- type: nauc_recall_at_10_std
value: 15.281423796153174
- type: nauc_recall_at_1_diff1
value: 29.069568522388433
- type: nauc_recall_at_1_max
value: 19.942608291469913
- type: nauc_recall_at_1_std
value: 3.1283142332992635
- type: nauc_recall_at_20_diff1
value: 13.618514792543923
- type: nauc_recall_at_20_max
value: 23.3574173893104
- type: nauc_recall_at_20_std
value: 16.629711994588593
- type: nauc_recall_at_3_diff1
value: 21.305869779106818
- type: nauc_recall_at_3_max
value: 23.815582518552738
- type: nauc_recall_at_3_std
value: 10.358496834243747
- type: nauc_recall_at_5_diff1
value: 18.546773284411426
- type: nauc_recall_at_5_max
value: 22.987289739937083
- type: nauc_recall_at_5_std
value: 12.591593599364312
- type: ndcg_at_1
value: 15.479000000000001
- type: ndcg_at_10
value: 27.400000000000002
- type: ndcg_at_100
value: 33.382
- type: ndcg_at_1000
value: 36.691
- type: ndcg_at_20
value: 29.352
- type: ndcg_at_3
value: 22.759999999999998
- type: ndcg_at_5
value: 25.006
- type: precision_at_1
value: 15.479000000000001
- type: precision_at_10
value: 4.075
- type: precision_at_100
value: 0.7040000000000001
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 2.426
- type: precision_at_3
value: 9.309000000000001
- type: precision_at_5
value: 6.683
- type: recall_at_1
value: 15.479000000000001
- type: recall_at_10
value: 40.745
- type: recall_at_100
value: 70.434
- type: recall_at_1000
value: 97.21499999999999
- type: recall_at_20
value: 48.526
- type: recall_at_3
value: 27.927999999999997
- type: recall_at_5
value: 33.415
task:
type: Retrieval
- dataset:
config: fr
name: MTEB OpusparcusPC (fr)
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
split: test
type: GEM/opusparcus
metrics:
- type: cosine_accuracy
value: 82.42506811989101
- type: cosine_accuracy_threshold
value: 59.91581678390503
- type: cosine_ap
value: 92.86245135331164
- type: cosine_f1
value: 88.0
- type: cosine_f1_threshold
value: 59.91581678390503
- type: cosine_precision
value: 82.76465441819772
- type: cosine_recall
value: 93.94240317775571
- type: dot_accuracy
value: 82.42506811989101
- type: dot_accuracy_threshold
value: 59.91581678390503
- type: dot_ap
value: 92.86245135331164
- type: dot_f1
value: 88.0
- type: dot_f1_threshold
value: 59.91581678390503
- type: dot_precision
value: 82.76465441819772
- type: dot_recall
value: 93.94240317775571
- type: euclidean_accuracy
value: 82.42506811989101
- type: euclidean_accuracy_threshold
value: 89.53677415847778
- type: euclidean_ap
value: 92.86245135331164
- type: euclidean_f1
value: 88.0
- type: euclidean_f1_threshold
value: 89.53677415847778
- type: euclidean_precision
value: 82.76465441819772
- type: euclidean_recall
value: 93.94240317775571
- type: main_score
value: 92.86245135331164
- type: manhattan_accuracy
value: 82.28882833787466
- type: manhattan_accuracy_threshold
value: 2091.843032836914
- type: manhattan_ap
value: 92.84258977975239
- type: manhattan_f1
value: 87.88443616029824
- type: manhattan_f1_threshold
value: 2091.843032836914
- type: manhattan_precision
value: 82.79192273924495
- type: manhattan_recall
value: 93.64448857994041
- type: max_ap
value: 92.86245135331164
- type: max_f1
value: 88.0
- type: max_precision
value: 82.79192273924495
- type: max_recall
value: 93.94240317775571
- type: similarity_accuracy
value: 82.42506811989101
- type: similarity_accuracy_threshold
value: 59.91581678390503
- type: similarity_ap
value: 92.86245135331164
- type: similarity_f1
value: 88.0
- type: similarity_f1_threshold
value: 59.91581678390503
- type: similarity_precision
value: 82.76465441819772
- type: similarity_recall
value: 93.94240317775571
task:
type: PairClassification
- dataset:
config: fr
name: MTEB PawsXPairClassification (fr)
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
split: test
type: google-research-datasets/paws-x
metrics:
- type: cosine_accuracy
value: 61.050000000000004
- type: cosine_accuracy_threshold
value: 98.11633825302124
- type: cosine_ap
value: 60.385395031891264
- type: cosine_f1
value: 62.60428001450852
- type: cosine_f1_threshold
value: 89.5184874534607
- type: cosine_precision
value: 46.54800431499461
- type: cosine_recall
value: 95.5703211517165
- type: dot_accuracy
value: 61.050000000000004
- type: dot_accuracy_threshold
value: 98.11633825302124
- type: dot_ap
value: 60.37120015758097
- type: dot_f1
value: 62.60428001450852
- type: dot_f1_threshold
value: 89.5184874534607
- type: dot_precision
value: 46.54800431499461
- type: dot_recall
value: 95.5703211517165
- type: euclidean_accuracy
value: 61.050000000000004
- type: euclidean_accuracy_threshold
value: 19.409586489200592
- type: euclidean_ap
value: 60.385395031891264
- type: euclidean_f1
value: 62.60428001450852
- type: euclidean_f1_threshold
value: 45.78540325164795
- type: euclidean_precision
value: 46.54800431499461
- type: euclidean_recall
value: 95.5703211517165
- type: main_score
value: 60.61779879922903
- type: manhattan_accuracy
value: 61.0
- type: manhattan_accuracy_threshold
value: 455.7579040527344
- type: manhattan_ap
value: 60.61779879922903
- type: manhattan_f1
value: 62.56448047162859
- type: manhattan_f1_threshold
value: 1030.442714691162
- type: manhattan_precision
value: 46.880176697956934
- type: manhattan_recall
value: 94.01993355481729
- type: max_ap
value: 60.61779879922903
- type: max_f1
value: 62.60428001450852
- type: max_precision
value: 46.880176697956934
- type: max_recall
value: 95.5703211517165
- type: similarity_accuracy
value: 61.050000000000004
- type: similarity_accuracy_threshold
value: 98.11633825302124
- type: similarity_ap
value: 60.385395031891264
- type: similarity_f1
value: 62.60428001450852
- type: similarity_f1_threshold
value: 89.5184874534607
- type: similarity_precision
value: 46.54800431499461
- type: similarity_recall
value: 95.5703211517165
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICKFr
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
split: test
type: Lajavaness/SICK-fr
metrics:
- type: cosine_pearson
value: 81.36950266249254
- type: cosine_spearman
value: 77.4306890341242
- type: euclidean_pearson
value: 77.47472965962992
- type: euclidean_spearman
value: 77.431649040768
- type: main_score
value: 77.4306890341242
- type: manhattan_pearson
value: 77.44468465408777
- type: manhattan_spearman
value: 77.25503240591341
- type: pearson
value: 81.36950266249254
- type: spearman
value: 77.4306890341242
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 81.48671869348665
- type: cosine_spearman
value: 82.57396913836067
- type: euclidean_pearson
value: 81.71206012505978
- type: euclidean_spearman
value: 82.64978141643995
- type: main_score
value: 82.57396913836067
- type: manhattan_pearson
value: 82.22351352342636
- type: manhattan_spearman
value: 83.04856400618516
- type: pearson
value: 81.48671869348665
- type: spearman
value: 82.57396913836067
task:
type: STS
- dataset:
config: de-fr
name: MTEB STS22 (de-fr)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 60.45418014677442
- type: cosine_spearman
value: 64.66584550775643
- type: euclidean_pearson
value: 60.042908719941124
- type: euclidean_spearman
value: 64.66584550775643
- type: main_score
value: 64.66584550775643
- type: manhattan_pearson
value: 58.56106956676841
- type: manhattan_spearman
value: 64.07469227945803
- type: pearson
value: 60.45418014677442
- type: spearman
value: 64.66584550775643
task:
type: STS
- dataset:
config: fr-pl
name: MTEB STS22 (fr-pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 83.39169883126554
- type: cosine_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 83.79128537281704
- type: euclidean_spearman
value: 84.51542547285167
- type: main_score
value: 84.51542547285167
- type: manhattan_pearson
value: 82.282109060827
- type: manhattan_spearman
value: 84.51542547285167
- type: pearson
value: 83.39169883126554
- type: spearman
value: 84.51542547285167
task:
type: STS
- dataset:
config: fr
name: MTEB STSBenchmarkMultilingualSTS (fr)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics:
- type: cosine_pearson
value: 81.23994381619546
- type: cosine_spearman
value: 81.55923116292537
- type: euclidean_pearson
value: 79.95507984767936
- type: euclidean_spearman
value: 81.55780186152964
- type: main_score
value: 81.55923116292537
- type: manhattan_pearson
value: 79.85599761287939
- type: manhattan_spearman
value: 81.47864706229939
- type: pearson
value: 81.23994381619546
- type: spearman
value: 81.55923116292537
task:
type: STS
- dataset:
config: default
name: MTEB SummEvalFr
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
split: test
type: lyon-nlp/summarization-summeval-fr-p2p
metrics:
- type: cosine_pearson
value: 32.15173983476866
- type: cosine_spearman
value: 30.52126378106083
- type: dot_pearson
value: 32.15174076737564
- type: dot_spearman
value: 30.5195596882719
- type: main_score
value: 30.52126378106083
- type: pearson
value: 32.15173983476866
- type: spearman
value: 30.52126378106083
task:
type: Summarization
- dataset:
config: default
name: MTEB SyntecReranking
revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad
split: test
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
metrics:
- type: main_score
value: 87.26666666666667
- type: map
value: 87.26666666666667
- type: mrr
value: 87.26666666666667
- type: nAUC_map_diff1
value: 61.78899094665834
- type: nAUC_map_max
value: -2.2012304949668993
- type: nAUC_map_std
value: 37.30593860183502
- type: nAUC_mrr_diff1
value: 61.78899094665834
- type: nAUC_mrr_max
value: -2.2012304949668993
- type: nAUC_mrr_std
value: 37.30593860183502
task:
type: Reranking
- dataset:
config: default
name: MTEB SyntecRetrieval
revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
split: test
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
metrics:
- type: main_score
value: 82.43599999999999
- type: map_at_1
value: 64.0
- type: map_at_10
value: 76.996
- type: map_at_100
value: 77.013
- type: map_at_1000
value: 77.013
- type: map_at_20
value: 76.996
- type: map_at_3
value: 75.333
- type: map_at_5
value: 76.283
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 76.99563492063493
- type: mrr_at_100
value: 77.01349206349207
- type: mrr_at_1000
value: 77.01349206349207
- type: mrr_at_20
value: 76.99563492063493
- type: mrr_at_3
value: 75.33333333333334
- type: mrr_at_5
value: 76.28333333333335
- type: nauc_map_at_1000_diff1
value: 52.30753137123808
- type: nauc_map_at_1000_max
value: 17.29347799374363
- type: nauc_map_at_1000_std
value: -24.365180584916605
- type: nauc_map_at_100_diff1
value: 52.30753137123808
- type: nauc_map_at_100_max
value: 17.29347799374363
- type: nauc_map_at_100_std
value: -24.365180584916605
- type: nauc_map_at_10_diff1
value: 52.32585614998896
- type: nauc_map_at_10_max
value: 17.261799514404697
- type: nauc_map_at_10_std
value: -24.30981171513401
- type: nauc_map_at_1_diff1
value: 56.0129007536084
- type: nauc_map_at_1_max
value: 18.50970749776472
- type: nauc_map_at_1_std
value: -25.554029888874723
- type: nauc_map_at_20_diff1
value: 52.32585614998896
- type: nauc_map_at_20_max
value: 17.261799514404697
- type: nauc_map_at_20_std
value: -24.30981171513401
- type: nauc_map_at_3_diff1
value: 51.22942949153543
- type: nauc_map_at_3_max
value: 15.992554731586273
- type: nauc_map_at_3_std
value: -25.091588619375383
- type: nauc_map_at_5_diff1
value: 51.96750082957349
- type: nauc_map_at_5_max
value: 17.158674012807587
- type: nauc_map_at_5_std
value: -23.657966651531893
- type: nauc_mrr_at_1000_diff1
value: 52.30753137123808
- type: nauc_mrr_at_1000_max
value: 17.29347799374363
- type: nauc_mrr_at_1000_std
value: -24.365180584916605
- type: nauc_mrr_at_100_diff1
value: 52.30753137123808
- type: nauc_mrr_at_100_max
value: 17.29347799374363
- type: nauc_mrr_at_100_std
value: -24.365180584916605
- type: nauc_mrr_at_10_diff1
value: 52.32585614998896
- type: nauc_mrr_at_10_max
value: 17.261799514404697
- type: nauc_mrr_at_10_std
value: -24.30981171513401
- type: nauc_mrr_at_1_diff1
value: 56.0129007536084
- type: nauc_mrr_at_1_max
value: 18.50970749776472
- type: nauc_mrr_at_1_std
value: -25.554029888874723
- type: nauc_mrr_at_20_diff1
value: 52.32585614998896
- type: nauc_mrr_at_20_max
value: 17.261799514404697
- type: nauc_mrr_at_20_std
value: -24.30981171513401
- type: nauc_mrr_at_3_diff1
value: 51.22942949153543
- type: nauc_mrr_at_3_max
value: 15.992554731586273
- type: nauc_mrr_at_3_std
value: -25.091588619375383
- type: nauc_mrr_at_5_diff1
value: 51.96750082957349
- type: nauc_mrr_at_5_max
value: 17.158674012807587
- type: nauc_mrr_at_5_std
value: -23.657966651531893
- type: nauc_ndcg_at_1000_diff1
value: 52.25936013546259
- type: nauc_ndcg_at_1000_max
value: 17.156377900614427
- type: nauc_ndcg_at_1000_std
value: -23.860918956976775
- type: nauc_ndcg_at_100_diff1
value: 52.25936013546259
- type: nauc_ndcg_at_100_max
value: 17.156377900614427
- type: nauc_ndcg_at_100_std
value: -23.860918956976775
- type: nauc_ndcg_at_10_diff1
value: 52.48908784081352
- type: nauc_ndcg_at_10_max
value: 16.761778191196626
- type: nauc_ndcg_at_10_std
value: -23.1742676723163
- type: nauc_ndcg_at_1_diff1
value: 56.0129007536084
- type: nauc_ndcg_at_1_max
value: 18.50970749776472
- type: nauc_ndcg_at_1_std
value: -25.554029888874723
- type: nauc_ndcg_at_20_diff1
value: 52.48908784081352
- type: nauc_ndcg_at_20_max
value: 16.761778191196626
- type: nauc_ndcg_at_20_std
value: -23.1742676723163
- type: nauc_ndcg_at_3_diff1
value: 50.39571507644849
- type: nauc_ndcg_at_3_max
value: 14.796226924105916
- type: nauc_ndcg_at_3_std
value: -24.55184971150951
- type: nauc_ndcg_at_5_diff1
value: 51.764690566839796
- type: nauc_ndcg_at_5_max
value: 17.064884477394884
- type: nauc_ndcg_at_5_std
value: -21.11624960412319
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: .nan
- type: nauc_precision_at_100_max
value: .nan
- type: nauc_precision_at_100_std
value: .nan
- type: nauc_precision_at_10_diff1
value: 72.22222222222277
- type: nauc_precision_at_10_max
value: -17.133520074696808
- type: nauc_precision_at_10_std
value: 35.80765639589114
- type: nauc_precision_at_1_diff1
value: 56.0129007536084
- type: nauc_precision_at_1_max
value: 18.50970749776472
- type: nauc_precision_at_1_std
value: -25.554029888874723
- type: nauc_precision_at_20_diff1
value: 72.22222222222277
- type: nauc_precision_at_20_max
value: -17.133520074696808
- type: nauc_precision_at_20_std
value: 35.80765639589114
- type: nauc_precision_at_3_diff1
value: 46.23716153127904
- type: nauc_precision_at_3_max
value: 7.563025210083932
- type: nauc_precision_at_3_std
value: -21.092436974790093
- type: nauc_precision_at_5_diff1
value: 51.618425147836945
- type: nauc_precision_at_5_max
value: 16.923436041083008
- type: nauc_precision_at_5_std
value: 5.765639589169112
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: 72.22222222222202
- type: nauc_recall_at_10_max
value: -17.133520074696147
- type: nauc_recall_at_10_std
value: 35.80765639589109
- type: nauc_recall_at_1_diff1
value: 56.0129007536084
- type: nauc_recall_at_1_max
value: 18.50970749776472
- type: nauc_recall_at_1_std
value: -25.554029888874723
- type: nauc_recall_at_20_diff1
value: 72.22222222222202
- type: nauc_recall_at_20_max
value: -17.133520074696147
- type: nauc_recall_at_20_std
value: 35.80765639589109
- type: nauc_recall_at_3_diff1
value: 46.23716153127918
- type: nauc_recall_at_3_max
value: 7.563025210084062
- type: nauc_recall_at_3_std
value: -21.092436974789898
- type: nauc_recall_at_5_diff1
value: 51.618425147837044
- type: nauc_recall_at_5_max
value: 16.923436041083242
- type: nauc_recall_at_5_std
value: 5.765639589169263
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 82.43599999999999
- type: ndcg_at_100
value: 82.607
- type: ndcg_at_1000
value: 82.607
- type: ndcg_at_20
value: 82.43599999999999
- type: ndcg_at_3
value: 79.095
- type: ndcg_at_5
value: 80.774
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.95
- type: precision_at_3
value: 30.0
- type: precision_at_5
value: 18.8
- type: recall_at_1
value: 64.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 99.0
- type: recall_at_3
value: 90.0
- type: recall_at_5
value: 94.0
task:
type: Retrieval
- dataset:
config: fra-fra
name: MTEB XPQARetrieval (fr)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: main_score
value: 62.190999999999995
- type: map_at_1
value: 35.412
- type: map_at_10
value: 55.372
- type: map_at_100
value: 56.835
- type: map_at_1000
value: 56.913000000000004
- type: map_at_20
value: 56.221
- type: map_at_3
value: 48.903
- type: map_at_5
value: 53.238
- type: mrr_at_1
value: 57.009345794392516
- type: mrr_at_10
value: 64.78569309343675
- type: mrr_at_100
value: 65.37729406210731
- type: mrr_at_1000
value: 65.40232200760255
- type: mrr_at_20
value: 65.19512187170714
- type: mrr_at_3
value: 62.305295950155745
- type: mrr_at_5
value: 63.97418780596347
- type: nauc_map_at_1000_diff1
value: 51.293436542919736
- type: nauc_map_at_1000_max
value: 52.70558085897355
- type: nauc_map_at_1000_std
value: 4.042307291430875
- type: nauc_map_at_100_diff1
value: 51.26892284346969
- type: nauc_map_at_100_max
value: 52.68013316306771
- type: nauc_map_at_100_std
value: 4.026915747351222
- type: nauc_map_at_10_diff1
value: 50.8543852249949
- type: nauc_map_at_10_max
value: 52.15208348725869
- type: nauc_map_at_10_std
value: 3.6915190933761437
- type: nauc_map_at_1_diff1
value: 59.961175322517725
- type: nauc_map_at_1_max
value: 37.84020048887668
- type: nauc_map_at_1_std
value: -1.716395538164829
- type: nauc_map_at_20_diff1
value: 51.07739560575918
- type: nauc_map_at_20_max
value: 52.37861214759321
- type: nauc_map_at_20_std
value: 3.6707917482294397
- type: nauc_map_at_3_diff1
value: 52.519227595940954
- type: nauc_map_at_3_max
value: 48.64938894035591
- type: nauc_map_at_3_std
value: 2.670992373225412
- type: nauc_map_at_5_diff1
value: 51.66705458189757
- type: nauc_map_at_5_max
value: 51.74913250220439
- type: nauc_map_at_5_std
value: 3.987564394588077
- type: nauc_mrr_at_1000_diff1
value: 58.90292049458316
- type: nauc_mrr_at_1000_max
value: 59.02377527770008
- type: nauc_mrr_at_1000_std
value: 6.15239522914937
- type: nauc_mrr_at_100_diff1
value: 58.88627703402866
- type: nauc_mrr_at_100_max
value: 59.01733085707039
- type: nauc_mrr_at_100_std
value: 6.149383764160973
- type: nauc_mrr_at_10_diff1
value: 58.787561655079315
- type: nauc_mrr_at_10_max
value: 58.883901063919616
- type: nauc_mrr_at_10_std
value: 5.955816839989
- type: nauc_mrr_at_1_diff1
value: 61.493169979051274
- type: nauc_mrr_at_1_max
value: 60.26766809318437
- type: nauc_mrr_at_1_std
value: 7.9345773661140555
- type: nauc_mrr_at_20_diff1
value: 58.88172676495632
- type: nauc_mrr_at_20_max
value: 59.01063084619932
- type: nauc_mrr_at_20_std
value: 5.999917023489485
- type: nauc_mrr_at_3_diff1
value: 59.328585273714765
- type: nauc_mrr_at_3_max
value: 59.138843933099984
- type: nauc_mrr_at_3_std
value: 5.867564048529799
- type: nauc_mrr_at_5_diff1
value: 59.01605585266293
- type: nauc_mrr_at_5_max
value: 59.35576576264414
- type: nauc_mrr_at_5_std
value: 6.4159398933971294
- type: nauc_ndcg_at_1000_diff1
value: 52.72831771372173
- type: nauc_ndcg_at_1000_max
value: 55.00758519121888
- type: nauc_ndcg_at_1000_std
value: 4.985669533881848
- type: nauc_ndcg_at_100_diff1
value: 52.108377732208176
- type: nauc_ndcg_at_100_max
value: 54.48165097844046
- type: nauc_ndcg_at_100_std
value: 4.90669931060551
- type: nauc_ndcg_at_10_diff1
value: 50.664291148529664
- type: nauc_ndcg_at_10_max
value: 52.99267789451465
- type: nauc_ndcg_at_10_std
value: 3.2476865951979432
- type: nauc_ndcg_at_1_diff1
value: 61.493169979051274
- type: nauc_ndcg_at_1_max
value: 60.26766809318437
- type: nauc_ndcg_at_1_std
value: 7.9345773661140555
- type: nauc_ndcg_at_20_diff1
value: 51.18525105808147
- type: nauc_ndcg_at_20_max
value: 53.43688504608144
- type: nauc_ndcg_at_20_std
value: 3.0898823820531667
- type: nauc_ndcg_at_3_diff1
value: 51.86574900383314
- type: nauc_ndcg_at_3_max
value: 54.590246592806615
- type: nauc_ndcg_at_3_std
value: 4.145862812422975
- type: nauc_ndcg_at_5_diff1
value: 52.02045236842261
- type: nauc_ndcg_at_5_max
value: 53.32018698876075
- type: nauc_ndcg_at_5_std
value: 4.253069053649545
- type: nauc_precision_at_1000_diff1
value: -15.302260566955942
- type: nauc_precision_at_1000_max
value: 12.78016543871415
- type: nauc_precision_at_1000_std
value: 9.650613541206308
- type: nauc_precision_at_100_diff1
value: -11.169900642295536
- type: nauc_precision_at_100_max
value: 17.997775654873607
- type: nauc_precision_at_100_std
value: 10.335855037587864
- type: nauc_precision_at_10_diff1
value: -0.7223213004392349
- type: nauc_precision_at_10_max
value: 30.1027627113279
- type: nauc_precision_at_10_std
value: 8.226673861581954
- type: nauc_precision_at_1_diff1
value: 61.493169979051274
- type: nauc_precision_at_1_max
value: 60.26766809318437
- type: nauc_precision_at_1_std
value: 7.9345773661140555
- type: nauc_precision_at_20_diff1
value: -4.815929448858574
- type: nauc_precision_at_20_max
value: 25.356128631092655
- type: nauc_precision_at_20_std
value: 7.647974758815793
- type: nauc_precision_at_3_diff1
value: 14.618447863791332
- type: nauc_precision_at_3_max
value: 42.347601836456704
- type: nauc_precision_at_3_std
value: 9.351508502457152
- type: nauc_precision_at_5_diff1
value: 6.989536536316584
- type: nauc_precision_at_5_max
value: 37.43282182319603
- type: nauc_precision_at_5_std
value: 10.294650747748632
- type: nauc_recall_at_1000_diff1
value: 66.00655448172738
- type: nauc_recall_at_1000_max
value: 71.84347765996883
- type: nauc_recall_at_1000_std
value: 50.90067212878784
- type: nauc_recall_at_100_diff1
value: 36.14296627142933
- type: nauc_recall_at_100_max
value: 41.197429505920766
- type: nauc_recall_at_100_std
value: 7.431041060310201
- type: nauc_recall_at_10_diff1
value: 37.65270595753883
- type: nauc_recall_at_10_max
value: 41.691362683452276
- type: nauc_recall_at_10_std
value: -2.3254949626448083
- type: nauc_recall_at_1_diff1
value: 59.961175322517725
- type: nauc_recall_at_1_max
value: 37.84020048887668
- type: nauc_recall_at_1_std
value: -1.716395538164829
- type: nauc_recall_at_20_diff1
value: 36.92285554147242
- type: nauc_recall_at_20_max
value: 40.480804692339525
- type: nauc_recall_at_20_std
value: -4.660293872779451
- type: nauc_recall_at_3_diff1
value: 47.84172346809966
- type: nauc_recall_at_3_max
value: 45.05790681661395
- type: nauc_recall_at_3_std
value: 0.48589911004729147
- type: nauc_recall_at_5_diff1
value: 43.57123230477339
- type: nauc_recall_at_5_max
value: 45.95815692338621
- type: nauc_recall_at_5_std
value: 2.026516305217224
- type: ndcg_at_1
value: 57.009
- type: ndcg_at_10
value: 62.190999999999995
- type: ndcg_at_100
value: 67.174
- type: ndcg_at_1000
value: 68.446
- type: ndcg_at_20
value: 64.348
- type: ndcg_at_3
value: 56.233999999999995
- type: ndcg_at_5
value: 58.709999999999994
- type: precision_at_1
value: 57.009
- type: precision_at_10
value: 14.673
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_20
value: 8.091
- type: precision_at_3
value: 34.624
- type: precision_at_5
value: 25.394
- type: recall_at_1
value: 35.412
- type: recall_at_10
value: 72.214
- type: recall_at_100
value: 91.415
- type: recall_at_1000
value: 99.533
- type: recall_at_20
value: 79.103
- type: recall_at_3
value: 53.529
- type: recall_at_5
value: 63.62
task:
type: Retrieval
- dataset:
config: eng-fra
name: MTEB XPQARetrieval (eng-fra)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: main_score
value: 31.380000000000003
- type: map_at_1
value: 11.257
- type: map_at_10
value: 24.596
- type: map_at_100
value: 27.267000000000003
- type: map_at_1000
value: 27.412999999999997
- type: map_at_20
value: 26.107999999999997
- type: map_at_3
value: 19.236
- type: map_at_5
value: 22.076999999999998
- type: mrr_at_1
value: 23.76502002670227
- type: mrr_at_10
value: 32.646120753597366
- type: mrr_at_100
value: 34.021717341570096
- type: mrr_at_1000
value: 34.08123584522526
- type: mrr_at_20
value: 33.488454614873945
- type: mrr_at_3
value: 29.439252336448607
- type: mrr_at_5
value: 30.97463284379172
- type: nauc_map_at_1000_diff1
value: 23.090590573188127
- type: nauc_map_at_1000_max
value: 37.736493247159515
- type: nauc_map_at_1000_std
value: 10.98069893040178
- type: nauc_map_at_100_diff1
value: 23.08559086307178
- type: nauc_map_at_100_max
value: 37.72263314123226
- type: nauc_map_at_100_std
value: 11.042922887319614
- type: nauc_map_at_10_diff1
value: 22.919253103936867
- type: nauc_map_at_10_max
value: 37.11680228717991
- type: nauc_map_at_10_std
value: 9.851990888901907
- type: nauc_map_at_1_diff1
value: 26.479314334323384
- type: nauc_map_at_1_max
value: 24.606099049654016
- type: nauc_map_at_1_std
value: 7.368843855661875
- type: nauc_map_at_20_diff1
value: 22.84865788594623
- type: nauc_map_at_20_max
value: 37.35013174420624
- type: nauc_map_at_20_std
value: 10.38206527259999
- type: nauc_map_at_3_diff1
value: 24.422040907804902
- type: nauc_map_at_3_max
value: 34.1407580102983
- type: nauc_map_at_3_std
value: 6.90072751192396
- type: nauc_map_at_5_diff1
value: 23.679285267333217
- type: nauc_map_at_5_max
value: 36.69505551539262
- type: nauc_map_at_5_std
value: 9.071400025204603
- type: nauc_mrr_at_1000_diff1
value: 23.91122464190796
- type: nauc_mrr_at_1000_max
value: 38.00739859980611
- type: nauc_mrr_at_1000_std
value: 12.603177305247423
- type: nauc_mrr_at_100_diff1
value: 23.926489219810712
- type: nauc_mrr_at_100_max
value: 38.01653317102498
- type: nauc_mrr_at_100_std
value: 12.631657383704397
- type: nauc_mrr_at_10_diff1
value: 23.793536028816924
- type: nauc_mrr_at_10_max
value: 37.731699667898546
- type: nauc_mrr_at_10_std
value: 12.519721615734111
- type: nauc_mrr_at_1_diff1
value: 26.560927789365497
- type: nauc_mrr_at_1_max
value: 39.34339331908778
- type: nauc_mrr_at_1_std
value: 11.755625469925857
- type: nauc_mrr_at_20_diff1
value: 23.785050335795756
- type: nauc_mrr_at_20_max
value: 37.70507807708539
- type: nauc_mrr_at_20_std
value: 12.401310290425641
- type: nauc_mrr_at_3_diff1
value: 24.760339690704274
- type: nauc_mrr_at_3_max
value: 38.97081556411779
- type: nauc_mrr_at_3_std
value: 12.403416856601224
- type: nauc_mrr_at_5_diff1
value: 24.16786185395756
- type: nauc_mrr_at_5_max
value: 38.675901959087064
- type: nauc_mrr_at_5_std
value: 12.328016386544244
- type: nauc_ndcg_at_1000_diff1
value: 22.575525759807498
- type: nauc_ndcg_at_1000_max
value: 38.08756303764784
- type: nauc_ndcg_at_1000_std
value: 12.993082901884351
- type: nauc_ndcg_at_100_diff1
value: 22.84247295232495
- type: nauc_ndcg_at_100_max
value: 38.07376875349487
- type: nauc_ndcg_at_100_std
value: 14.670272841790322
- type: nauc_ndcg_at_10_diff1
value: 21.851855665665028
- type: nauc_ndcg_at_10_max
value: 36.30808033173574
- type: nauc_ndcg_at_10_std
value: 10.754345146682587
- type: nauc_ndcg_at_1_diff1
value: 26.560927789365497
- type: nauc_ndcg_at_1_max
value: 39.34339331908778
- type: nauc_ndcg_at_1_std
value: 11.755625469925857
- type: nauc_ndcg_at_20_diff1
value: 21.85222563105362
- type: nauc_ndcg_at_20_max
value: 36.49693582912162
- type: nauc_ndcg_at_20_std
value: 11.462407172413222
- type: nauc_ndcg_at_3_diff1
value: 23.835148821074096
- type: nauc_ndcg_at_3_max
value: 37.21286292761239
- type: nauc_ndcg_at_3_std
value: 8.965675045214653
- type: nauc_ndcg_at_5_diff1
value: 22.94941035043304
- type: nauc_ndcg_at_5_max
value: 37.116308712473725
- type: nauc_ndcg_at_5_std
value: 9.96746473363745
- type: nauc_precision_at_1000_diff1
value: 4.391641883500156
- type: nauc_precision_at_1000_max
value: 22.960724719570653
- type: nauc_precision_at_1000_std
value: 9.90771833324347
- type: nauc_precision_at_100_diff1
value: 9.398103008957907
- type: nauc_precision_at_100_max
value: 29.966107038070213
- type: nauc_precision_at_100_std
value: 18.246515814298206
- type: nauc_precision_at_10_diff1
value: 14.642013509002073
- type: nauc_precision_at_10_max
value: 39.865916483254914
- type: nauc_precision_at_10_std
value: 16.389751433271922
- type: nauc_precision_at_1_diff1
value: 26.560927789365497
- type: nauc_precision_at_1_max
value: 39.34339331908778
- type: nauc_precision_at_1_std
value: 11.755625469925857
- type: nauc_precision_at_20_diff1
value: 12.328250607495741
- type: nauc_precision_at_20_max
value: 36.609492322958076
- type: nauc_precision_at_20_std
value: 16.186393097514785
- type: nauc_precision_at_3_diff1
value: 21.43869193024236
- type: nauc_precision_at_3_max
value: 44.92920554318338
- type: nauc_precision_at_3_std
value: 12.93524236487951
- type: nauc_precision_at_5_diff1
value: 17.980792540844075
- type: nauc_precision_at_5_max
value: 44.67180132719046
- type: nauc_precision_at_5_std
value: 15.44379773164089
- type: nauc_recall_at_1000_diff1
value: -18.599562189867928
- type: nauc_recall_at_1000_max
value: -1.233438302856996
- type: nauc_recall_at_1000_std
value: 60.504773500458754
- type: nauc_recall_at_100_diff1
value: 21.73131824226728
- type: nauc_recall_at_100_max
value: 33.813071564297644
- type: nauc_recall_at_100_std
value: 31.938349559054004
- type: nauc_recall_at_10_diff1
value: 17.11887766943705
- type: nauc_recall_at_10_max
value: 28.89674920890047
- type: nauc_recall_at_10_std
value: 7.773984628905876
- type: nauc_recall_at_1_diff1
value: 26.479314334323384
- type: nauc_recall_at_1_max
value: 24.606099049654016
- type: nauc_recall_at_1_std
value: 7.368843855661875
- type: nauc_recall_at_20_diff1
value: 17.295953047798886
- type: nauc_recall_at_20_max
value: 28.434654095893304
- type: nauc_recall_at_20_std
value: 9.427920198911856
- type: nauc_recall_at_3_diff1
value: 21.272960191663262
- type: nauc_recall_at_3_max
value: 30.445386445037144
- type: nauc_recall_at_3_std
value: 4.74984017701616
- type: nauc_recall_at_5_diff1
value: 19.423326866459472
- type: nauc_recall_at_5_max
value: 32.51726362019113
- type: nauc_recall_at_5_std
value: 7.7878756846006185
- type: ndcg_at_1
value: 23.765
- type: ndcg_at_10
value: 31.380000000000003
- type: ndcg_at_100
value: 41.426
- type: ndcg_at_1000
value: 44.168
- type: ndcg_at_20
value: 35.449000000000005
- type: ndcg_at_3
value: 24.845
- type: ndcg_at_5
value: 26.705000000000002
- type: precision_at_1
value: 23.765
- type: precision_at_10
value: 9.879999999999999
- type: precision_at_100
value: 1.865
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_20
value: 6.449000000000001
- type: precision_at_3
value: 18.024
- type: precision_at_5
value: 14.472999999999999
- type: recall_at_1
value: 11.257
- type: recall_at_10
value: 42.345
- type: recall_at_100
value: 81.159
- type: recall_at_1000
value: 99.29
- type: recall_at_20
value: 54.989
- type: recall_at_3
value: 23.687
- type: recall_at_5
value: 30.823
task:
type: Retrieval
- dataset:
config: fra-eng
name: MTEB XPQARetrieval (fra-eng)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: main_score
value: 56.635999999999996
- type: map_at_1
value: 31.4
- type: map_at_10
value: 50.056
- type: map_at_100
value: 51.663000000000004
- type: map_at_1000
value: 51.761
- type: map_at_20
value: 50.927
- type: map_at_3
value: 44.529999999999994
- type: map_at_5
value: 47.894
- type: mrr_at_1
value: 50.467289719626166
- type: mrr_at_10
value: 58.950823319982185
- type: mrr_at_100
value: 59.70354953666045
- type: mrr_at_1000
value: 59.734711425279755
- type: mrr_at_20
value: 59.40583228190128
- type: mrr_at_3
value: 56.875834445927886
- type: mrr_at_5
value: 58.21762349799728
- type: nauc_map_at_1000_diff1
value: 48.15648920144338
- type: nauc_map_at_1000_max
value: 46.702778511311514
- type: nauc_map_at_1000_std
value: -2.8986084054302346
- type: nauc_map_at_100_diff1
value: 48.07320124865117
- type: nauc_map_at_100_max
value: 46.66060865870994
- type: nauc_map_at_100_std
value: -2.898261800096327
- type: nauc_map_at_10_diff1
value: 48.02406723579077
- type: nauc_map_at_10_max
value: 46.41839190788124
- type: nauc_map_at_10_std
value: -3.2566313465012535
- type: nauc_map_at_1_diff1
value: 54.13992707642448
- type: nauc_map_at_1_max
value: 34.04660478197247
- type: nauc_map_at_1_std
value: -4.558752037228464
- type: nauc_map_at_20_diff1
value: 48.046199789059344
- type: nauc_map_at_20_max
value: 46.720705370675915
- type: nauc_map_at_20_std
value: -3.033997271677673
- type: nauc_map_at_3_diff1
value: 50.009783024030185
- type: nauc_map_at_3_max
value: 42.35942421403899
- type: nauc_map_at_3_std
value: -5.2762823138538515
- type: nauc_map_at_5_diff1
value: 48.8354268056224
- type: nauc_map_at_5_max
value: 45.655213495860814
- type: nauc_map_at_5_std
value: -3.7884263147862267
- type: nauc_mrr_at_1000_diff1
value: 53.36845252957243
- type: nauc_mrr_at_1000_max
value: 51.36922708038703
- type: nauc_mrr_at_1000_std
value: -1.4510764030641954
- type: nauc_mrr_at_100_diff1
value: 53.3537222476053
- type: nauc_mrr_at_100_max
value: 51.38049608859829
- type: nauc_mrr_at_100_std
value: -1.4191780664448506
- type: nauc_mrr_at_10_diff1
value: 53.305802521069
- type: nauc_mrr_at_10_max
value: 51.21960893720018
- type: nauc_mrr_at_10_std
value: -1.6724093244930498
- type: nauc_mrr_at_1_diff1
value: 55.70120557955961
- type: nauc_mrr_at_1_max
value: 53.01658211876319
- type: nauc_mrr_at_1_std
value: -0.6423359202704497
- type: nauc_mrr_at_20_diff1
value: 53.34768541161141
- type: nauc_mrr_at_20_max
value: 51.352620113317805
- type: nauc_mrr_at_20_std
value: -1.5006800933364013
- type: nauc_mrr_at_3_diff1
value: 53.39969881700113
- type: nauc_mrr_at_3_max
value: 50.89022404206973
- type: nauc_mrr_at_3_std
value: -3.1275962557855412
- type: nauc_mrr_at_5_diff1
value: 53.6906061507349
- type: nauc_mrr_at_5_max
value: 51.45261103925232
- type: nauc_mrr_at_5_std
value: -1.7795696130396883
- type: nauc_ndcg_at_1000_diff1
value: 48.95637773496826
- type: nauc_ndcg_at_1000_max
value: 48.197622067566826
- type: nauc_ndcg_at_1000_std
value: -1.4607313404789106
- type: nauc_ndcg_at_100_diff1
value: 47.71577524982021
- type: nauc_ndcg_at_100_max
value: 47.883023532341504
- type: nauc_ndcg_at_100_std
value: -0.6132109059243465
- type: nauc_ndcg_at_10_diff1
value: 47.5329600424363
- type: nauc_ndcg_at_10_max
value: 47.498459285878575
- type: nauc_ndcg_at_10_std
value: -2.330121342823272
- type: nauc_ndcg_at_1_diff1
value: 55.70120557955961
- type: nauc_ndcg_at_1_max
value: 53.01658211876319
- type: nauc_ndcg_at_1_std
value: -0.6423359202704497
- type: nauc_ndcg_at_20_diff1
value: 47.6173989193167
- type: nauc_ndcg_at_20_max
value: 48.19865615901621
- type: nauc_ndcg_at_20_std
value: -1.6128175051145877
- type: nauc_ndcg_at_3_diff1
value: 48.78930092666264
- type: nauc_ndcg_at_3_max
value: 46.4431323615495
- type: nauc_ndcg_at_3_std
value: -5.431496363976204
- type: nauc_ndcg_at_5_diff1
value: 49.11424543999915
- type: nauc_ndcg_at_5_max
value: 47.05648749366126
- type: nauc_ndcg_at_5_std
value: -3.330885962532834
- type: nauc_precision_at_1000_diff1
value: -10.880765837183755
- type: nauc_precision_at_1000_max
value: 8.572817422349692
- type: nauc_precision_at_1000_std
value: 4.766982235965037
- type: nauc_precision_at_100_diff1
value: -8.679642859295267
- type: nauc_precision_at_100_max
value: 13.715180395886897
- type: nauc_precision_at_100_std
value: 6.946301090207475
- type: nauc_precision_at_10_diff1
value: 4.944045819175594
- type: nauc_precision_at_10_max
value: 30.760105361109925
- type: nauc_precision_at_10_std
value: 3.6068920141401626
- type: nauc_precision_at_1_diff1
value: 55.70120557955961
- type: nauc_precision_at_1_max
value: 53.01658211876319
- type: nauc_precision_at_1_std
value: -0.6423359202704497
- type: nauc_precision_at_20_diff1
value: 0.8043591939583385
- type: nauc_precision_at_20_max
value: 26.360434462685422
- type: nauc_precision_at_20_std
value: 4.739891658844582
- type: nauc_precision_at_3_diff1
value: 19.013124811719553
- type: nauc_precision_at_3_max
value: 38.42804762790048
- type: nauc_precision_at_3_std
value: -1.4085959010900053
- type: nauc_precision_at_5_diff1
value: 12.360123599205414
- type: nauc_precision_at_5_max
value: 37.08361417845578
- type: nauc_precision_at_5_std
value: 1.9104788050916797
- type: nauc_recall_at_1000_diff1
value: 64.46395887603528
- type: nauc_recall_at_1000_max
value: 25.40689664838346
- type: nauc_recall_at_1000_std
value: 64.91673770650863
- type: nauc_recall_at_100_diff1
value: 23.04629413894431
- type: nauc_recall_at_100_max
value: 37.70267898773106
- type: nauc_recall_at_100_std
value: 19.483375935785805
- type: nauc_recall_at_10_diff1
value: 37.89470563650895
- type: nauc_recall_at_10_max
value: 41.88446616509962
- type: nauc_recall_at_10_std
value: -0.5968285599827128
- type: nauc_recall_at_1_diff1
value: 54.13992707642448
- type: nauc_recall_at_1_max
value: 34.04660478197247
- type: nauc_recall_at_1_std
value: -4.558752037228464
- type: nauc_recall_at_20_diff1
value: 36.41725409411871
- type: nauc_recall_at_20_max
value: 43.570833102022796
- type: nauc_recall_at_20_std
value: 2.4475141353956724
- type: nauc_recall_at_3_diff1
value: 44.46469511434876
- type: nauc_recall_at_3_max
value: 36.60941837529587
- type: nauc_recall_at_3_std
value: -8.466344004251715
- type: nauc_recall_at_5_diff1
value: 43.140961160644444
- type: nauc_recall_at_5_max
value: 42.12923427424881
- type: nauc_recall_at_5_std
value: -3.2514274060186428
- type: ndcg_at_1
value: 50.467
- type: ndcg_at_10
value: 56.635999999999996
- type: ndcg_at_100
value: 62.575
- type: ndcg_at_1000
value: 64.153
- type: ndcg_at_20
value: 58.909
- type: ndcg_at_3
value: 51.636
- type: ndcg_at_5
value: 53.252
- type: precision_at_1
value: 50.467
- type: precision_at_10
value: 13.458
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_20
value: 7.582999999999999
- type: precision_at_3
value: 31.865
- type: precision_at_5
value: 22.884
- type: recall_at_1
value: 31.4
- type: recall_at_10
value: 66.19
- type: recall_at_100
value: 89.577
- type: recall_at_1000
value: 99.695
- type: recall_at_20
value: 73.213
- type: recall_at_3
value: 50.699000000000005
- type: recall_at_5
value: 58.158
task:
type: Retrieval
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 49.22465208747514
- type: f1
value: 35.68158330115517
- type: f1_weighted
value: 44.81425765760541
- type: main_score
value: 49.22465208747514
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna-PL
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
split: test
type: clarin-knext/arguana-pl
metrics:
- type: main_score
value: 49.668
- type: map_at_1
value: 24.751
- type: map_at_10
value: 40.36
- type: map_at_100
value: 41.368
- type: map_at_1000
value: 41.379
- type: map_at_20
value: 41.134
- type: map_at_3
value: 34.945
- type: map_at_5
value: 38.043
- type: mrr_at_1
value: 25.03556187766714
- type: mrr_at_10
value: 40.47856126803494
- type: mrr_at_100
value: 41.49280025917654
- type: mrr_at_1000
value: 41.50319481040459
- type: mrr_at_20
value: 41.25788030596975
- type: mrr_at_3
value: 35.0521574205784
- type: mrr_at_5
value: 38.167377904219954
- type: nauc_map_at_1000_diff1
value: 7.731653729111241
- type: nauc_map_at_1000_max
value: -6.3011371446014115
- type: nauc_map_at_1000_std
value: -6.06100995003556
- type: nauc_map_at_100_diff1
value: 7.740664698795466
- type: nauc_map_at_100_max
value: -6.278576653918305
- type: nauc_map_at_100_std
value: -6.048854855804748
- type: nauc_map_at_10_diff1
value: 7.58994360921921
- type: nauc_map_at_10_max
value: -6.486918896565689
- type: nauc_map_at_10_std
value: -6.590603504257126
- type: nauc_map_at_1_diff1
value: 10.018749983163797
- type: nauc_map_at_1_max
value: -9.286741407015537
- type: nauc_map_at_1_std
value: -6.604729499204554
- type: nauc_map_at_20_diff1
value: 7.706256252764164
- type: nauc_map_at_20_max
value: -6.168914547814974
- type: nauc_map_at_20_std
value: -6.083566639755691
- type: nauc_map_at_3_diff1
value: 7.033893231381659
- type: nauc_map_at_3_max
value: -6.945660103296161
- type: nauc_map_at_3_std
value: -6.0565345896842135
- type: nauc_map_at_5_diff1
value: 7.205099657249722
- type: nauc_map_at_5_max
value: -6.776921990255051
- type: nauc_map_at_5_std
value: -5.907533989245036
- type: nauc_mrr_at_1000_diff1
value: 6.668270267618491
- type: nauc_mrr_at_1000_max
value: -6.803645974646868
- type: nauc_mrr_at_1000_std
value: -6.110358020715999
- type: nauc_mrr_at_100_diff1
value: 6.677624675636143
- type: nauc_mrr_at_100_max
value: -6.78097136036329
- type: nauc_mrr_at_100_std
value: -6.098217879471153
- type: nauc_mrr_at_10_diff1
value: 6.468832159598689
- type: nauc_mrr_at_10_max
value: -7.0315355572474925
- type: nauc_mrr_at_10_std
value: -6.601932672455336
- type: nauc_mrr_at_1_diff1
value: 9.07223439791323
- type: nauc_mrr_at_1_max
value: -9.264510377291506
- type: nauc_mrr_at_1_std
value: -6.764808343700734
- type: nauc_mrr_at_20_diff1
value: 6.65302226067872
- type: nauc_mrr_at_20_max
value: -6.666040499900585
- type: nauc_mrr_at_20_std
value: -6.132351790646591
- type: nauc_mrr_at_3_diff1
value: 5.824560443333769
- type: nauc_mrr_at_3_max
value: -7.573354775954246
- type: nauc_mrr_at_3_std
value: -6.106371480222379
- type: nauc_mrr_at_5_diff1
value: 6.209821468263958
- type: nauc_mrr_at_5_max
value: -7.271141379552105
- type: nauc_mrr_at_5_std
value: -5.938481110932588
- type: nauc_ndcg_at_1000_diff1
value: 7.773930949495924
- type: nauc_ndcg_at_1000_max
value: -5.1914799213542535
- type: nauc_ndcg_at_1000_std
value: -5.443963700763181
- type: nauc_ndcg_at_100_diff1
value: 8.057028087355645
- type: nauc_ndcg_at_100_max
value: -4.531668964685114
- type: nauc_ndcg_at_100_std
value: -5.043531367158232
- type: nauc_ndcg_at_10_diff1
value: 7.464635855577513
- type: nauc_ndcg_at_10_max
value: -4.878234464633695
- type: nauc_ndcg_at_10_std
value: -7.040243622992924
- type: nauc_ndcg_at_1_diff1
value: 10.018749983163797
- type: nauc_ndcg_at_1_max
value: -9.286741407015537
- type: nauc_ndcg_at_1_std
value: -6.604729499204554
- type: nauc_ndcg_at_20_diff1
value: 7.927592870050634
- type: nauc_ndcg_at_20_max
value: -3.5850025129078804
- type: nauc_ndcg_at_20_std
value: -5.171152516248472
- type: nauc_ndcg_at_3_diff1
value: 6.2883775843899485
- type: nauc_ndcg_at_3_max
value: -6.088799255371655
- type: nauc_ndcg_at_3_std
value: -5.718514280311179
- type: nauc_ndcg_at_5_diff1
value: 6.560041121192067
- type: nauc_ndcg_at_5_max
value: -5.667390479730649
- type: nauc_ndcg_at_5_std
value: -5.345467266005971
- type: nauc_precision_at_1000_diff1
value: 3.3584681799320566
- type: nauc_precision_at_1000_max
value: 27.67410378535401
- type: nauc_precision_at_1000_std
value: 73.59018487762006
- type: nauc_precision_at_100_diff1
value: 31.86229567780328
- type: nauc_precision_at_100_max
value: 57.759019425342615
- type: nauc_precision_at_100_std
value: 45.17932914356757
- type: nauc_precision_at_10_diff1
value: 7.59135628113755
- type: nauc_precision_at_10_max
value: 3.3516129835437254
- type: nauc_precision_at_10_std
value: -9.981248425456624
- type: nauc_precision_at_1_diff1
value: 10.018749983163797
- type: nauc_precision_at_1_max
value: -9.286741407015537
- type: nauc_precision_at_1_std
value: -6.604729499204554
- type: nauc_precision_at_20_diff1
value: 12.340895595423683
- type: nauc_precision_at_20_max
value: 22.834947429467178
- type: nauc_precision_at_20_std
value: 5.3105422687851425
- type: nauc_precision_at_3_diff1
value: 4.279842180460012
- type: nauc_precision_at_3_max
value: -3.6828818164493162
- type: nauc_precision_at_3_std
value: -4.735859463411824
- type: nauc_precision_at_5_diff1
value: 4.654912773566626
- type: nauc_precision_at_5_max
value: -2.0537304325752452
- type: nauc_precision_at_5_std
value: -3.419667795061248
- type: nauc_recall_at_1000_diff1
value: 3.358468179927671
- type: nauc_recall_at_1000_max
value: 27.674103785350603
- type: nauc_recall_at_1000_std
value: 73.59018487761793
- type: nauc_recall_at_100_diff1
value: 31.862295677802706
- type: nauc_recall_at_100_max
value: 57.75901942534214
- type: nauc_recall_at_100_std
value: 45.17932914356684
- type: nauc_recall_at_10_diff1
value: 7.591356281137633
- type: nauc_recall_at_10_max
value: 3.351612983543776
- type: nauc_recall_at_10_std
value: -9.981248425456481
- type: nauc_recall_at_1_diff1
value: 10.018749983163797
- type: nauc_recall_at_1_max
value: -9.286741407015537
- type: nauc_recall_at_1_std
value: -6.604729499204554
- type: nauc_recall_at_20_diff1
value: 12.340895595423826
- type: nauc_recall_at_20_max
value: 22.834947429467274
- type: nauc_recall_at_20_std
value: 5.310542268785199
- type: nauc_recall_at_3_diff1
value: 4.279842180460059
- type: nauc_recall_at_3_max
value: -3.682881816449298
- type: nauc_recall_at_3_std
value: -4.735859463411806
- type: nauc_recall_at_5_diff1
value: 4.6549127735666795
- type: nauc_recall_at_5_max
value: -2.0537304325752013
- type: nauc_recall_at_5_std
value: -3.419667795061247
- type: ndcg_at_1
value: 24.751
- type: ndcg_at_10
value: 49.668
- type: ndcg_at_100
value: 53.867
- type: ndcg_at_1000
value: 54.102
- type: ndcg_at_20
value: 52.34799999999999
- type: ndcg_at_3
value: 38.451
- type: ndcg_at_5
value: 44.069
- type: precision_at_1
value: 24.751
- type: precision_at_10
value: 7.965999999999999
- type: precision_at_100
value: 0.9780000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.4990000000000006
- type: precision_at_3
value: 16.216
- type: precision_at_5
value: 12.475
- type: recall_at_1
value: 24.751
- type: recall_at_10
value: 79.659
- type: recall_at_100
value: 97.795
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_20
value: 89.972
- type: recall_at_3
value: 48.649
- type: recall_at_5
value: 62.376
task:
type: Retrieval
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 62.85999999999999
- type: ap
value: 18.744713128220596
- type: ap_weighted
value: 18.744713128220596
- type: f1
value: 53.296341093646696
- type: f1_weighted
value: 68.61665005768842
- type: main_score
value: 62.85999999999999
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics:
- type: cosine_accuracy
value: 87.3
- type: cosine_accuracy_threshold
value: 95.8031415939331
- type: cosine_ap
value: 69.77225668650979
- type: cosine_f1
value: 63.04909560723513
- type: cosine_f1_threshold
value: 86.9259238243103
- type: cosine_precision
value: 61.92893401015228
- type: cosine_recall
value: 64.21052631578948
- type: dot_accuracy
value: 87.3
- type: dot_accuracy_threshold
value: 95.8031415939331
- type: dot_ap
value: 69.77225668650979
- type: dot_f1
value: 63.04909560723513
- type: dot_f1_threshold
value: 86.9259238243103
- type: dot_precision
value: 61.92893401015228
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 87.3
- type: euclidean_accuracy_threshold
value: 28.971904516220093
- type: euclidean_ap
value: 69.77225668650979
- type: euclidean_f1
value: 63.04909560723513
- type: euclidean_f1_threshold
value: 51.135218143463135
- type: euclidean_precision
value: 61.92893401015228
- type: euclidean_recall
value: 64.21052631578948
- type: main_score
value: 70.04616767691698
- type: manhattan_accuracy
value: 87.5
- type: manhattan_accuracy_threshold
value: 790.4520988464355
- type: manhattan_ap
value: 70.04616767691698
- type: manhattan_f1
value: 63.54166666666667
- type: manhattan_f1_threshold
value: 1195.075511932373
- type: manhattan_precision
value: 62.88659793814433
- type: manhattan_recall
value: 64.21052631578948
- type: max_ap
value: 70.04616767691698
- type: max_f1
value: 63.54166666666667
- type: max_precision
value: 62.88659793814433
- type: max_recall
value: 64.21052631578948
- type: similarity_accuracy
value: 87.3
- type: similarity_accuracy_threshold
value: 95.8031415939331
- type: similarity_ap
value: 69.77225668650979
- type: similarity_f1
value: 63.04909560723513
- type: similarity_f1_threshold
value: 86.9259238243103
- type: similarity_precision
value: 61.92893401015228
- type: similarity_recall
value: 64.21052631578948
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics:
- type: cosine_pearson
value: 90.1467539156439
- type: cosine_spearman
value: 90.37983178422222
- type: euclidean_pearson
value: 87.54100647769168
- type: euclidean_spearman
value: 90.37983178422222
- type: main_score
value: 90.37983178422222
- type: manhattan_pearson
value: 87.6231001602879
- type: manhattan_spearman
value: 90.52798044659546
- type: pearson
value: 90.1467539156439
- type: spearman
value: 90.37983178422222
task:
type: STS
- dataset:
config: default
name: MTEB DBPedia-PL
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
split: test
type: clarin-knext/dbpedia-pl
metrics:
- type: main_score
value: 24.287
- type: map_at_1
value: 5.225
- type: map_at_10
value: 10.774000000000001
- type: map_at_100
value: 14.748
- type: map_at_1000
value: 15.836
- type: map_at_20
value: 12.27
- type: map_at_3
value: 7.724
- type: map_at_5
value: 9.246
- type: mrr_at_1
value: 39.25
- type: mrr_at_10
value: 50.17480158730157
- type: mrr_at_100
value: 50.8519822068327
- type: mrr_at_1000
value: 50.879556078064134
- type: mrr_at_20
value: 50.58405713438607
- type: mrr_at_3
value: 47.250000000000014
- type: mrr_at_5
value: 49.175000000000004
- type: nauc_map_at_1000_diff1
value: 20.045024346779645
- type: nauc_map_at_1000_max
value: 30.337666854953092
- type: nauc_map_at_1000_std
value: 26.557075239939543
- type: nauc_map_at_100_diff1
value: 19.9252316411722
- type: nauc_map_at_100_max
value: 28.226642852584742
- type: nauc_map_at_100_std
value: 22.914021046648696
- type: nauc_map_at_10_diff1
value: 26.566241524936572
- type: nauc_map_at_10_max
value: 21.748824204804716
- type: nauc_map_at_10_std
value: 8.638991435098609
- type: nauc_map_at_1_diff1
value: 36.393726837054814
- type: nauc_map_at_1_max
value: 16.477605805271057
- type: nauc_map_at_1_std
value: 0.5753087963352366
- type: nauc_map_at_20_diff1
value: 23.401102079878182
- type: nauc_map_at_20_max
value: 23.065898894709402
- type: nauc_map_at_20_std
value: 13.423353653712915
- type: nauc_map_at_3_diff1
value: 30.91796624589218
- type: nauc_map_at_3_max
value: 16.45545569680709
- type: nauc_map_at_3_std
value: 0.6366650378026352
- type: nauc_map_at_5_diff1
value: 28.80351568065496
- type: nauc_map_at_5_max
value: 19.084673921482615
- type: nauc_map_at_5_std
value: 4.139131073579019
- type: nauc_mrr_at_1000_diff1
value: 20.16962170000775
- type: nauc_mrr_at_1000_max
value: 38.15430502575843
- type: nauc_mrr_at_1000_std
value: 32.440668939436264
- type: nauc_mrr_at_100_diff1
value: 20.15910246738786
- type: nauc_mrr_at_100_max
value: 38.15774234365609
- type: nauc_mrr_at_100_std
value: 32.44872216192449
- type: nauc_mrr_at_10_diff1
value: 20.049148781541064
- type: nauc_mrr_at_10_max
value: 37.97309789914626
- type: nauc_mrr_at_10_std
value: 32.418004097599166
- type: nauc_mrr_at_1_diff1
value: 23.9620307539266
- type: nauc_mrr_at_1_max
value: 33.83610178961887
- type: nauc_mrr_at_1_std
value: 28.58448609419965
- type: nauc_mrr_at_20_diff1
value: 20.06080688488365
- type: nauc_mrr_at_20_max
value: 38.06868785040665
- type: nauc_mrr_at_20_std
value: 32.22384606323392
- type: nauc_mrr_at_3_diff1
value: 20.71531876285696
- type: nauc_mrr_at_3_max
value: 37.54485901132759
- type: nauc_mrr_at_3_std
value: 31.77679862739285
- type: nauc_mrr_at_5_diff1
value: 20.003442037824826
- type: nauc_mrr_at_5_max
value: 38.37916584335752
- type: nauc_mrr_at_5_std
value: 32.091488996264154
- type: nauc_ndcg_at_1000_diff1
value: 18.932875904116358
- type: nauc_ndcg_at_1000_max
value: 37.69461269411873
- type: nauc_ndcg_at_1000_std
value: 40.49355007241307
- type: nauc_ndcg_at_100_diff1
value: 18.62868572859794
- type: nauc_ndcg_at_100_max
value: 32.5251773358776
- type: nauc_ndcg_at_100_std
value: 34.17298333080795
- type: nauc_ndcg_at_10_diff1
value: 21.33571858413017
- type: nauc_ndcg_at_10_max
value: 32.95411878498034
- type: nauc_ndcg_at_10_std
value: 30.26350297086653
- type: nauc_ndcg_at_1_diff1
value: 25.698485822118034
- type: nauc_ndcg_at_1_max
value: 27.751178850383283
- type: nauc_ndcg_at_1_std
value: 25.499914018590097
- type: nauc_ndcg_at_20_diff1
value: 20.564620650130962
- type: nauc_ndcg_at_20_max
value: 29.636273615266877
- type: nauc_ndcg_at_20_std
value: 29.0657094246048
- type: nauc_ndcg_at_3_diff1
value: 21.331262925027644
- type: nauc_ndcg_at_3_max
value: 32.3211075722955
- type: nauc_ndcg_at_3_std
value: 29.30569912466711
- type: nauc_ndcg_at_5_diff1
value: 20.906573479242933
- type: nauc_ndcg_at_5_max
value: 33.817640032948255
- type: nauc_ndcg_at_5_std
value: 30.210587907489593
- type: nauc_precision_at_1000_diff1
value: 7.9336700303824905
- type: nauc_precision_at_1000_max
value: 25.382181071880133
- type: nauc_precision_at_1000_std
value: 45.03790857159645
- type: nauc_precision_at_100_diff1
value: -2.1616719372797286
- type: nauc_precision_at_100_max
value: 38.41562489705835
- type: nauc_precision_at_100_std
value: 51.0132959449221
- type: nauc_precision_at_10_diff1
value: 2.3699655796458936
- type: nauc_precision_at_10_max
value: 38.87889003229129
- type: nauc_precision_at_10_std
value: 43.071785955076145
- type: nauc_precision_at_1_diff1
value: 23.9620307539266
- type: nauc_precision_at_1_max
value: 33.83610178961887
- type: nauc_precision_at_1_std
value: 28.58448609419965
- type: nauc_precision_at_20_diff1
value: -0.5466417961649375
- type: nauc_precision_at_20_max
value: 36.55638995946497
- type: nauc_precision_at_20_std
value: 46.90182951874849
- type: nauc_precision_at_3_diff1
value: 9.180998281598255
- type: nauc_precision_at_3_max
value: 35.97368107639076
- type: nauc_precision_at_3_std
value: 34.362776108183525
- type: nauc_precision_at_5_diff1
value: 6.188700805809966
- type: nauc_precision_at_5_max
value: 39.69905715436714
- type: nauc_precision_at_5_std
value: 37.630912034924016
- type: nauc_recall_at_1000_diff1
value: 12.957700393477442
- type: nauc_recall_at_1000_max
value: 30.999439787327205
- type: nauc_recall_at_1000_std
value: 39.191755156518575
- type: nauc_recall_at_100_diff1
value: 12.761105551850163
- type: nauc_recall_at_100_max
value: 26.695898719215045
- type: nauc_recall_at_100_std
value: 29.150806165495208
- type: nauc_recall_at_10_diff1
value: 19.097397019523825
- type: nauc_recall_at_10_max
value: 18.259583702998956
- type: nauc_recall_at_10_std
value: 8.897590380469557
- type: nauc_recall_at_1_diff1
value: 36.393726837054814
- type: nauc_recall_at_1_max
value: 16.477605805271057
- type: nauc_recall_at_1_std
value: 0.5753087963352366
- type: nauc_recall_at_20_diff1
value: 14.751462451918885
- type: nauc_recall_at_20_max
value: 17.17387812389538
- type: nauc_recall_at_20_std
value: 11.686450060418395
- type: nauc_recall_at_3_diff1
value: 28.2693968902148
- type: nauc_recall_at_3_max
value: 15.503661857890341
- type: nauc_recall_at_3_std
value: -0.6006615114775526
- type: nauc_recall_at_5_diff1
value: 21.69553199450905
- type: nauc_recall_at_5_max
value: 16.68339699974409
- type: nauc_recall_at_5_std
value: 4.201309425242677
- type: ndcg_at_1
value: 29.375
- type: ndcg_at_10
value: 24.287
- type: ndcg_at_100
value: 28.457
- type: ndcg_at_1000
value: 35.412
- type: ndcg_at_20
value: 24.189
- type: ndcg_at_3
value: 25.813000000000002
- type: ndcg_at_5
value: 25.374999999999996
- type: precision_at_1
value: 39.25
- type: precision_at_10
value: 19.6
- type: precision_at_100
value: 6.2700000000000005
- type: precision_at_1000
value: 1.452
- type: precision_at_20
value: 14.499999999999998
- type: precision_at_3
value: 29.083
- type: precision_at_5
value: 25.75
- type: recall_at_1
value: 5.225
- type: recall_at_10
value: 16.258
- type: recall_at_100
value: 35.569
- type: recall_at_1000
value: 57.958
- type: recall_at_20
value: 21.178
- type: recall_at_3
value: 8.866999999999999
- type: recall_at_5
value: 12.404
task:
type: Retrieval
- dataset:
config: default
name: MTEB 8TagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: main_score
value: 37.96267113583295
- type: v_measure
value: 37.96267113583295
- type: v_measure_std
value: 2.6597621214046576
task:
type: Clustering
- dataset:
config: default
name: MTEB FiQA-PL
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
split: test
type: clarin-knext/fiqa-pl
metrics:
- type: main_score
value: 24.374000000000002
- type: map_at_1
value: 11.362
- type: map_at_10
value: 18.464
- type: map_at_100
value: 19.791
- type: map_at_1000
value: 19.994
- type: map_at_20
value: 19.156000000000002
- type: map_at_3
value: 15.937000000000001
- type: map_at_5
value: 17.127
- type: mrr_at_1
value: 22.376543209876544
- type: mrr_at_10
value: 30.046724965706435
- type: mrr_at_100
value: 30.99706976191228
- type: mrr_at_1000
value: 31.076490053822308
- type: mrr_at_20
value: 30.59052580100912
- type: mrr_at_3
value: 27.854938271604944
- type: mrr_at_5
value: 28.912037037037035
- type: nauc_map_at_1000_diff1
value: 34.07557471766689
- type: nauc_map_at_1000_max
value: 24.91982727448087
- type: nauc_map_at_1000_std
value: 12.494927606505051
- type: nauc_map_at_100_diff1
value: 34.06635556229055
- type: nauc_map_at_100_max
value: 24.777935848367225
- type: nauc_map_at_100_std
value: 12.362066428153456
- type: nauc_map_at_10_diff1
value: 34.3306140967635
- type: nauc_map_at_10_max
value: 24.086194195608087
- type: nauc_map_at_10_std
value: 11.127465863787245
- type: nauc_map_at_1_diff1
value: 38.942215866162314
- type: nauc_map_at_1_max
value: 23.63998402727614
- type: nauc_map_at_1_std
value: 9.728241161220097
- type: nauc_map_at_20_diff1
value: 34.04736858130041
- type: nauc_map_at_20_max
value: 24.30446046409803
- type: nauc_map_at_20_std
value: 11.82019676487291
- type: nauc_map_at_3_diff1
value: 34.99965810997492
- type: nauc_map_at_3_max
value: 22.472906083967082
- type: nauc_map_at_3_std
value: 9.698945379216992
- type: nauc_map_at_5_diff1
value: 34.42282748114895
- type: nauc_map_at_5_max
value: 23.633268720383512
- type: nauc_map_at_5_std
value: 10.382815603500871
- type: nauc_mrr_at_1000_diff1
value: 34.704948586037965
- type: nauc_mrr_at_1000_max
value: 28.94016888494416
- type: nauc_mrr_at_1000_std
value: 13.914193825823684
- type: nauc_mrr_at_100_diff1
value: 34.67910995484378
- type: nauc_mrr_at_100_max
value: 28.90011297894453
- type: nauc_mrr_at_100_std
value: 13.870339909485788
- type: nauc_mrr_at_10_diff1
value: 34.97862910055978
- type: nauc_mrr_at_10_max
value: 28.891213481314647
- type: nauc_mrr_at_10_std
value: 13.632668727631797
- type: nauc_mrr_at_1_diff1
value: 36.9016752358079
- type: nauc_mrr_at_1_max
value: 30.89530420046735
- type: nauc_mrr_at_1_std
value: 14.386684064942584
- type: nauc_mrr_at_20_diff1
value: 34.73839610262596
- type: nauc_mrr_at_20_max
value: 28.705251186157255
- type: nauc_mrr_at_20_std
value: 13.753299339901334
- type: nauc_mrr_at_3_diff1
value: 34.76877538539127
- type: nauc_mrr_at_3_max
value: 28.77723698514852
- type: nauc_mrr_at_3_std
value: 13.717153469537122
- type: nauc_mrr_at_5_diff1
value: 34.32426309461695
- type: nauc_mrr_at_5_max
value: 28.620967773156714
- type: nauc_mrr_at_5_std
value: 13.382881213134276
- type: nauc_ndcg_at_1000_diff1
value: 32.77974173034191
- type: nauc_ndcg_at_1000_max
value: 28.36858648028177
- type: nauc_ndcg_at_1000_std
value: 17.55654423858263
- type: nauc_ndcg_at_100_diff1
value: 32.632483073737255
- type: nauc_ndcg_at_100_max
value: 26.296829067224515
- type: nauc_ndcg_at_100_std
value: 15.901063315847802
- type: nauc_ndcg_at_10_diff1
value: 33.951354557048134
- type: nauc_ndcg_at_10_max
value: 24.502438497165578
- type: nauc_ndcg_at_10_std
value: 12.270853057785972
- type: nauc_ndcg_at_1_diff1
value: 36.9016752358079
- type: nauc_ndcg_at_1_max
value: 30.89530420046735
- type: nauc_ndcg_at_1_std
value: 14.386684064942584
- type: nauc_ndcg_at_20_diff1
value: 33.28593916274325
- type: nauc_ndcg_at_20_max
value: 24.5380040373484
- type: nauc_ndcg_at_20_std
value: 13.863409012751617
- type: nauc_ndcg_at_3_diff1
value: 34.03004915907343
- type: nauc_ndcg_at_3_max
value: 25.366810943178187
- type: nauc_ndcg_at_3_std
value: 11.99466470963204
- type: nauc_ndcg_at_5_diff1
value: 33.75108435164904
- type: nauc_ndcg_at_5_max
value: 24.89793255411985
- type: nauc_ndcg_at_5_std
value: 11.213101565189755
- type: nauc_precision_at_1000_diff1
value: 8.88694146912782
- type: nauc_precision_at_1000_max
value: 28.194369745942677
- type: nauc_precision_at_1000_std
value: 15.075895083755153
- type: nauc_precision_at_100_diff1
value: 17.33142606816351
- type: nauc_precision_at_100_max
value: 30.560210907187134
- type: nauc_precision_at_100_std
value: 20.006767151320354
- type: nauc_precision_at_10_diff1
value: 27.325474826111495
- type: nauc_precision_at_10_max
value: 28.37196490647728
- type: nauc_precision_at_10_std
value: 14.398272703295254
- type: nauc_precision_at_1_diff1
value: 36.9016752358079
- type: nauc_precision_at_1_max
value: 30.89530420046735
- type: nauc_precision_at_1_std
value: 14.386684064942584
- type: nauc_precision_at_20_diff1
value: 24.927890600833123
- type: nauc_precision_at_20_max
value: 28.6077759408292
- type: nauc_precision_at_20_std
value: 16.922212691823013
- type: nauc_precision_at_3_diff1
value: 30.157161086783603
- type: nauc_precision_at_3_max
value: 27.80088080445145
- type: nauc_precision_at_3_std
value: 13.767444960442354
- type: nauc_precision_at_5_diff1
value: 27.22177598160483
- type: nauc_precision_at_5_max
value: 28.126925412497698
- type: nauc_precision_at_5_std
value: 12.668302840263246
- type: nauc_recall_at_1000_diff1
value: 13.021138171238658
- type: nauc_recall_at_1000_max
value: 29.086331163283578
- type: nauc_recall_at_1000_std
value: 40.165920815231445
- type: nauc_recall_at_100_diff1
value: 20.32032544663283
- type: nauc_recall_at_100_max
value: 19.52693905173919
- type: nauc_recall_at_100_std
value: 21.472521389265815
- type: nauc_recall_at_10_diff1
value: 27.863602171901302
- type: nauc_recall_at_10_max
value: 17.4718078150182
- type: nauc_recall_at_10_std
value: 11.474638155937823
- type: nauc_recall_at_1_diff1
value: 38.942215866162314
- type: nauc_recall_at_1_max
value: 23.63998402727614
- type: nauc_recall_at_1_std
value: 9.728241161220097
- type: nauc_recall_at_20_diff1
value: 24.72857110907966
- type: nauc_recall_at_20_max
value: 16.357016524448234
- type: nauc_recall_at_20_std
value: 15.437317261627213
- type: nauc_recall_at_3_diff1
value: 29.883191548110638
- type: nauc_recall_at_3_max
value: 16.895714663542783
- type: nauc_recall_at_3_std
value: 8.976963489103756
- type: nauc_recall_at_5_diff1
value: 28.877062029269666
- type: nauc_recall_at_5_max
value: 18.25013882823951
- type: nauc_recall_at_5_std
value: 9.760614924170874
- type: ndcg_at_1
value: 22.377
- type: ndcg_at_10
value: 24.374000000000002
- type: ndcg_at_100
value: 30.166999999999998
- type: ndcg_at_1000
value: 34.443
- type: ndcg_at_20
value: 26.457000000000004
- type: ndcg_at_3
value: 21.248
- type: ndcg_at_5
value: 21.976000000000003
- type: precision_at_1
value: 22.377
- type: precision_at_10
value: 6.851999999999999
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.2
- type: precision_at_20
value: 4.252000000000001
- type: precision_at_3
value: 14.146
- type: precision_at_5
value: 10.432
- type: recall_at_1
value: 11.362
- type: recall_at_10
value: 30.416999999999998
- type: recall_at_100
value: 52.547
- type: recall_at_1000
value: 79.107
- type: recall_at_20
value: 36.927
- type: recall_at_3
value: 19.888
- type: recall_at_5
value: 23.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA-PL
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
split: test
type: clarin-knext/hotpotqa-pl
metrics:
- type: main_score
value: 60.289
- type: map_at_1
value: 35.522999999999996
- type: map_at_10
value: 51.18000000000001
- type: map_at_100
value: 52.051
- type: map_at_1000
value: 52.122
- type: map_at_20
value: 51.673
- type: map_at_3
value: 48.246
- type: map_at_5
value: 50.019999999999996
- type: mrr_at_1
value: 71.04659014179609
- type: mrr_at_10
value: 77.46602467230403
- type: mrr_at_100
value: 77.71701045856283
- type: mrr_at_1000
value: 77.73109333465572
- type: mrr_at_20
value: 77.61606030291657
- type: mrr_at_3
value: 76.2975467026782
- type: mrr_at_5
value: 77.01530497411626
- type: nauc_map_at_1000_diff1
value: 27.072398495156897
- type: nauc_map_at_1000_max
value: 29.92494925850584
- type: nauc_map_at_1000_std
value: 6.122920064016644
- type: nauc_map_at_100_diff1
value: 27.045953237574043
- type: nauc_map_at_100_max
value: 29.91135310131925
- type: nauc_map_at_100_std
value: 6.102830174452808
- type: nauc_map_at_10_diff1
value: 27.14260536879246
- type: nauc_map_at_10_max
value: 29.786180574275033
- type: nauc_map_at_10_std
value: 5.48071498058778
- type: nauc_map_at_1_diff1
value: 71.43831250406643
- type: nauc_map_at_1_max
value: 50.69918783298206
- type: nauc_map_at_1_std
value: 4.065732274269463
- type: nauc_map_at_20_diff1
value: 26.985158932169607
- type: nauc_map_at_20_max
value: 29.769499559141337
- type: nauc_map_at_20_std
value: 5.7846108079403225
- type: nauc_map_at_3_diff1
value: 28.726407496616453
- type: nauc_map_at_3_max
value: 30.257904231332596
- type: nauc_map_at_3_std
value: 4.176791477760867
- type: nauc_map_at_5_diff1
value: 27.599671019792364
- type: nauc_map_at_5_max
value: 29.837459984143866
- type: nauc_map_at_5_std
value: 4.724857569088119
- type: nauc_mrr_at_1000_diff1
value: 69.74462431507696
- type: nauc_mrr_at_1000_max
value: 53.47426820826111
- type: nauc_mrr_at_1000_std
value: 7.017278438144492
- type: nauc_mrr_at_100_diff1
value: 69.7417920598051
- type: nauc_mrr_at_100_max
value: 53.48046534979321
- type: nauc_mrr_at_100_std
value: 7.024164329244427
- type: nauc_mrr_at_10_diff1
value: 69.67042683609824
- type: nauc_mrr_at_10_max
value: 53.481642001920314
- type: nauc_mrr_at_10_std
value: 6.916088911861879
- type: nauc_mrr_at_1_diff1
value: 71.43831250406643
- type: nauc_mrr_at_1_max
value: 50.69918783298206
- type: nauc_mrr_at_1_std
value: 4.065732274269463
- type: nauc_mrr_at_20_diff1
value: 69.69097669322561
- type: nauc_mrr_at_20_max
value: 53.48254877660139
- type: nauc_mrr_at_20_std
value: 6.954450273756836
- type: nauc_mrr_at_3_diff1
value: 69.65550049564045
- type: nauc_mrr_at_3_max
value: 53.423078677284806
- type: nauc_mrr_at_3_std
value: 6.824360632333201
- type: nauc_mrr_at_5_diff1
value: 69.85902124700681
- type: nauc_mrr_at_5_max
value: 53.71608187586825
- type: nauc_mrr_at_5_std
value: 6.90332690250169
- type: nauc_ndcg_at_1000_diff1
value: 32.371178459639395
- type: nauc_ndcg_at_1000_max
value: 34.193107156520355
- type: nauc_ndcg_at_1000_std
value: 9.981416864706453
- type: nauc_ndcg_at_100_diff1
value: 31.65178281180327
- type: nauc_ndcg_at_100_max
value: 33.88863515144708
- type: nauc_ndcg_at_100_std
value: 9.675400500125894
- type: nauc_ndcg_at_10_diff1
value: 32.09701979495255
- type: nauc_ndcg_at_10_max
value: 33.50276312450072
- type: nauc_ndcg_at_10_std
value: 7.191084522028669
- type: nauc_ndcg_at_1_diff1
value: 71.43831250406643
- type: nauc_ndcg_at_1_max
value: 50.69918783298206
- type: nauc_ndcg_at_1_std
value: 4.065732274269463
- type: nauc_ndcg_at_20_diff1
value: 31.562637576493692
- type: nauc_ndcg_at_20_max
value: 33.34017245498174
- type: nauc_ndcg_at_20_std
value: 7.969235939844162
- type: nauc_ndcg_at_3_diff1
value: 35.18977207313904
- type: nauc_ndcg_at_3_max
value: 34.673975073641905
- type: nauc_ndcg_at_3_std
value: 5.325459274582688
- type: nauc_ndcg_at_5_diff1
value: 33.38000278537343
- type: nauc_ndcg_at_5_max
value: 33.97918169254012
- type: nauc_ndcg_at_5_std
value: 5.978030273125264
- type: nauc_precision_at_1000_diff1
value: 2.024497553431021
- type: nauc_precision_at_1000_max
value: 19.574506433204107
- type: nauc_precision_at_1000_std
value: 28.192550360040663
- type: nauc_precision_at_100_diff1
value: 5.188258524609947
- type: nauc_precision_at_100_max
value: 21.306662841801312
- type: nauc_precision_at_100_std
value: 20.7260402080751
- type: nauc_precision_at_10_diff1
value: 12.855802595061384
- type: nauc_precision_at_10_max
value: 23.683240963949206
- type: nauc_precision_at_10_std
value: 9.888003594834135
- type: nauc_precision_at_1_diff1
value: 71.43831250406643
- type: nauc_precision_at_1_max
value: 50.69918783298206
- type: nauc_precision_at_1_std
value: 4.065732274269463
- type: nauc_precision_at_20_diff1
value: 9.630280191534592
- type: nauc_precision_at_20_max
value: 21.779527509411878
- type: nauc_precision_at_20_std
value: 12.159865759201564
- type: nauc_precision_at_3_diff1
value: 21.486219885493664
- type: nauc_precision_at_3_max
value: 28.180666352570384
- type: nauc_precision_at_3_std
value: 5.975796262301398
- type: nauc_precision_at_5_diff1
value: 16.91219034941122
- type: nauc_precision_at_5_max
value: 25.631420440783632
- type: nauc_precision_at_5_std
value: 7.008210555798029
- type: nauc_recall_at_1000_diff1
value: 2.0244975534313734
- type: nauc_recall_at_1000_max
value: 19.574506433204146
- type: nauc_recall_at_1000_std
value: 28.192550360040826
- type: nauc_recall_at_100_diff1
value: 5.188258524609966
- type: nauc_recall_at_100_max
value: 21.306662841801195
- type: nauc_recall_at_100_std
value: 20.72604020807505
- type: nauc_recall_at_10_diff1
value: 12.85580259506137
- type: nauc_recall_at_10_max
value: 23.68324096394915
- type: nauc_recall_at_10_std
value: 9.888003594834109
- type: nauc_recall_at_1_diff1
value: 71.43831250406643
- type: nauc_recall_at_1_max
value: 50.69918783298206
- type: nauc_recall_at_1_std
value: 4.065732274269463
- type: nauc_recall_at_20_diff1
value: 9.630280191534691
- type: nauc_recall_at_20_max
value: 21.779527509411942
- type: nauc_recall_at_20_std
value: 12.159865759201631
- type: nauc_recall_at_3_diff1
value: 21.486219885493682
- type: nauc_recall_at_3_max
value: 28.18066635257036
- type: nauc_recall_at_3_std
value: 5.975796262301328
- type: nauc_recall_at_5_diff1
value: 16.912190349411212
- type: nauc_recall_at_5_max
value: 25.631420440783636
- type: nauc_recall_at_5_std
value: 7.00821055579809
- type: ndcg_at_1
value: 71.04700000000001
- type: ndcg_at_10
value: 60.289
- type: ndcg_at_100
value: 63.499
- type: ndcg_at_1000
value: 64.97500000000001
- type: ndcg_at_20
value: 61.550000000000004
- type: ndcg_at_3
value: 55.901999999999994
- type: ndcg_at_5
value: 58.25
- type: precision_at_1
value: 71.04700000000001
- type: precision_at_10
value: 12.44
- type: precision_at_100
value: 1.498
- type: precision_at_1000
value: 0.169
- type: precision_at_20
value: 6.626
- type: precision_at_3
value: 34.976
- type: precision_at_5
value: 22.839000000000002
- type: recall_at_1
value: 35.522999999999996
- type: recall_at_10
value: 62.20099999999999
- type: recall_at_100
value: 74.91600000000001
- type: recall_at_1000
value: 84.74000000000001
- type: recall_at_20
value: 66.259
- type: recall_at_3
value: 52.464999999999996
- type: recall_at_5
value: 57.096999999999994
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO-PL
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
split: test
type: clarin-knext/msmarco-pl
metrics:
- type: main_score
value: 35.347
- type: map_at_1
value: 1.469
- type: map_at_10
value: 6.271
- type: map_at_100
value: 15.82
- type: map_at_1000
value: 19.756999999999998
- type: map_at_20
value: 9.132
- type: map_at_3
value: 3.075
- type: map_at_5
value: 4.191000000000001
- type: mrr_at_1
value: 51.162790697674424
- type: mrr_at_10
value: 61.57253599114064
- type: mrr_at_100
value: 61.70237312252635
- type: mrr_at_1000
value: 61.721282111697384
- type: mrr_at_20
value: 61.57253599114064
- type: mrr_at_3
value: 58.52713178294573
- type: mrr_at_5
value: 60.62015503875969
- type: nauc_map_at_1000_diff1
value: -6.26148455784313
- type: nauc_map_at_1000_max
value: 70.23579046863748
- type: nauc_map_at_1000_std
value: 77.42651490963746
- type: nauc_map_at_100_diff1
value: -1.4053806773143986
- type: nauc_map_at_100_max
value: 66.71686830976711
- type: nauc_map_at_100_std
value: 67.38852619857126
- type: nauc_map_at_10_diff1
value: 12.864067292274589
- type: nauc_map_at_10_max
value: 41.38716748783301
- type: nauc_map_at_10_std
value: 32.51689180198407
- type: nauc_map_at_1_diff1
value: -1.536748365124193
- type: nauc_map_at_1_max
value: -6.088587734229212
- type: nauc_map_at_1_std
value: -18.068863144899694
- type: nauc_map_at_20_diff1
value: 8.54318633682049
- type: nauc_map_at_20_max
value: 51.46280940802795
- type: nauc_map_at_20_std
value: 43.84995568398171
- type: nauc_map_at_3_diff1
value: 15.549945155617095
- type: nauc_map_at_3_max
value: 16.423852501631057
- type: nauc_map_at_3_std
value: 1.6301262698881138
- type: nauc_map_at_5_diff1
value: 17.143995737313784
- type: nauc_map_at_5_max
value: 25.892894000158563
- type: nauc_map_at_5_std
value: 13.91119386484427
- type: nauc_mrr_at_1000_diff1
value: 20.75486837047241
- type: nauc_mrr_at_1000_max
value: 48.77384161141147
- type: nauc_mrr_at_1000_std
value: 39.42169406046163
- type: nauc_mrr_at_100_diff1
value: 20.75098937410054
- type: nauc_mrr_at_100_max
value: 48.8055136010899
- type: nauc_mrr_at_100_std
value: 39.44826676492212
- type: nauc_mrr_at_10_diff1
value: 20.55168287172998
- type: nauc_mrr_at_10_max
value: 48.92605606155999
- type: nauc_mrr_at_10_std
value: 39.56397190201471
- type: nauc_mrr_at_1_diff1
value: 27.952914840599213
- type: nauc_mrr_at_1_max
value: 43.02872038128348
- type: nauc_mrr_at_1_std
value: 30.72899446812769
- type: nauc_mrr_at_20_diff1
value: 20.55168287172998
- type: nauc_mrr_at_20_max
value: 48.92605606155999
- type: nauc_mrr_at_20_std
value: 39.56397190201471
- type: nauc_mrr_at_3_diff1
value: 18.318386717289272
- type: nauc_mrr_at_3_max
value: 47.44180800437328
- type: nauc_mrr_at_3_std
value: 38.74641539481817
- type: nauc_mrr_at_5_diff1
value: 21.683568755627515
- type: nauc_mrr_at_5_max
value: 48.05001286700342
- type: nauc_mrr_at_5_std
value: 38.244355740197555
- type: nauc_ndcg_at_1000_diff1
value: -2.468906090162698
- type: nauc_ndcg_at_1000_max
value: 65.57871617608374
- type: nauc_ndcg_at_1000_std
value: 73.3847445547649
- type: nauc_ndcg_at_100_diff1
value: -2.586690833939304
- type: nauc_ndcg_at_100_max
value: 64.70786040635376
- type: nauc_ndcg_at_100_std
value: 70.64166116490425
- type: nauc_ndcg_at_10_diff1
value: 8.118353402716513
- type: nauc_ndcg_at_10_max
value: 49.844180236352955
- type: nauc_ndcg_at_10_std
value: 50.131893853105936
- type: nauc_ndcg_at_1_diff1
value: 29.009521103694098
- type: nauc_ndcg_at_1_max
value: 27.087717021875612
- type: nauc_ndcg_at_1_std
value: 12.6909059627947
- type: nauc_ndcg_at_20_diff1
value: 2.598718647600475
- type: nauc_ndcg_at_20_max
value: 53.91164998936515
- type: nauc_ndcg_at_20_std
value: 56.516639941588664
- type: nauc_ndcg_at_3_diff1
value: 23.836185343273044
- type: nauc_ndcg_at_3_max
value: 36.263454561458765
- type: nauc_ndcg_at_3_std
value: 28.43323538514256
- type: nauc_ndcg_at_5_diff1
value: 16.77391181835752
- type: nauc_ndcg_at_5_max
value: 43.296899586211104
- type: nauc_ndcg_at_5_std
value: 39.1824699044313
- type: nauc_precision_at_1000_diff1
value: -15.186803611287433
- type: nauc_precision_at_1000_max
value: 46.85780719962127
- type: nauc_precision_at_1000_std
value: 70.3960638613034
- type: nauc_precision_at_100_diff1
value: -15.422155872405632
- type: nauc_precision_at_100_max
value: 55.72313908696537
- type: nauc_precision_at_100_std
value: 76.82533899336994
- type: nauc_precision_at_10_diff1
value: -3.067825687414238
- type: nauc_precision_at_10_max
value: 56.91434531209
- type: nauc_precision_at_10_std
value: 66.04691744928004
- type: nauc_precision_at_1_diff1
value: 27.952914840599213
- type: nauc_precision_at_1_max
value: 43.02872038128348
- type: nauc_precision_at_1_std
value: 30.72899446812769
- type: nauc_precision_at_20_diff1
value: -5.544645405468878
- type: nauc_precision_at_20_max
value: 57.8695034639674
- type: nauc_precision_at_20_std
value: 68.93286041931582
- type: nauc_precision_at_3_diff1
value: 20.19348967585854
- type: nauc_precision_at_3_max
value: 45.597437337579386
- type: nauc_precision_at_3_std
value: 42.03959265688183
- type: nauc_precision_at_5_diff1
value: 9.23998523103908
- type: nauc_precision_at_5_max
value: 49.25574086871373
- type: nauc_precision_at_5_std
value: 52.88526969215077
- type: nauc_recall_at_1000_diff1
value: -8.862740141707581
- type: nauc_recall_at_1000_max
value: 55.712545242253256
- type: nauc_recall_at_1000_std
value: 67.30648023155955
- type: nauc_recall_at_100_diff1
value: -3.1881977191212036
- type: nauc_recall_at_100_max
value: 51.673275503044906
- type: nauc_recall_at_100_std
value: 54.48134578839626
- type: nauc_recall_at_10_diff1
value: 13.364983119491827
- type: nauc_recall_at_10_max
value: 36.25593546742792
- type: nauc_recall_at_10_std
value: 27.09713611846276
- type: nauc_recall_at_1_diff1
value: -1.536748365124193
- type: nauc_recall_at_1_max
value: -6.088587734229212
- type: nauc_recall_at_1_std
value: -18.068863144899694
- type: nauc_recall_at_20_diff1
value: 7.510007055555984
- type: nauc_recall_at_20_max
value: 38.09054135617318
- type: nauc_recall_at_20_std
value: 30.40674848457391
- type: nauc_recall_at_3_diff1
value: 14.714490489795676
- type: nauc_recall_at_3_max
value: 13.456002270727083
- type: nauc_recall_at_3_std
value: -1.5169948432854514
- type: nauc_recall_at_5_diff1
value: 15.54314759180975
- type: nauc_recall_at_5_max
value: 21.228461904073818
- type: nauc_recall_at_5_std
value: 9.414065747326763
- type: ndcg_at_1
value: 40.31
- type: ndcg_at_10
value: 35.347
- type: ndcg_at_100
value: 33.467
- type: ndcg_at_1000
value: 40.681
- type: ndcg_at_20
value: 34.001
- type: ndcg_at_3
value: 37.366
- type: ndcg_at_5
value: 36.394
- type: precision_at_1
value: 51.163000000000004
- type: precision_at_10
value: 44.186
- type: precision_at_100
value: 20.837
- type: precision_at_1000
value: 4.2299999999999995
- type: precision_at_20
value: 37.442
- type: precision_at_3
value: 50.388
- type: precision_at_5
value: 48.837
- type: recall_at_1
value: 1.469
- type: recall_at_10
value: 7.9479999999999995
- type: recall_at_100
value: 28.733999999999998
- type: recall_at_1000
value: 50.297000000000004
- type: recall_at_20
value: 12.948
- type: recall_at_3
value: 3.4259999999999997
- type: recall_at_5
value: 4.9110000000000005
task:
type: Retrieval
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 51.19031607262945
- type: f1
value: 46.10258936993461
- type: f1_weighted
value: 50.901181253035034
- type: main_score
value: 51.19031607262945
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 60.645595158036315
- type: f1
value: 59.44482127439026
- type: f1_weighted
value: 60.168807528534984
- type: main_score
value: 60.645595158036315
task:
type: Classification
- dataset:
config: default
name: MTEB NFCorpus-PL
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
split: test
type: clarin-knext/nfcorpus-pl
metrics:
- type: main_score
value: 25.395
- type: map_at_1
value: 4.162
- type: map_at_10
value: 8.706
- type: map_at_100
value: 10.825
- type: map_at_1000
value: 11.882
- type: map_at_20
value: 9.699
- type: map_at_3
value: 6.370000000000001
- type: map_at_5
value: 7.392
- type: mrr_at_1
value: 36.22291021671827
- type: mrr_at_10
value: 43.31662489557226
- type: mrr_at_100
value: 44.034094585948445
- type: mrr_at_1000
value: 44.08497362710692
- type: mrr_at_20
value: 43.73522641310121
- type: mrr_at_3
value: 41.17647058823529
- type: mrr_at_5
value: 42.19814241486068
- type: nauc_map_at_1000_diff1
value: 20.409989638127485
- type: nauc_map_at_1000_max
value: 21.313793692439358
- type: nauc_map_at_1000_std
value: 26.453432767218242
- type: nauc_map_at_100_diff1
value: 21.1324476885251
- type: nauc_map_at_100_max
value: 20.162732858714488
- type: nauc_map_at_100_std
value: 23.299208899543444
- type: nauc_map_at_10_diff1
value: 25.356667770298184
- type: nauc_map_at_10_max
value: 14.593319794998328
- type: nauc_map_at_10_std
value: 14.307985847242206
- type: nauc_map_at_1_diff1
value: 49.48663924597492
- type: nauc_map_at_1_max
value: 6.253498999289057
- type: nauc_map_at_1_std
value: -1.0237763936348632
- type: nauc_map_at_20_diff1
value: 23.25076257190515
- type: nauc_map_at_20_max
value: 18.067585719861558
- type: nauc_map_at_20_std
value: 18.661482884581616
- type: nauc_map_at_3_diff1
value: 36.09641802781903
- type: nauc_map_at_3_max
value: 10.438404957893699
- type: nauc_map_at_3_std
value: 6.545314741707626
- type: nauc_map_at_5_diff1
value: 31.563017185316582
- type: nauc_map_at_5_max
value: 10.624857568430182
- type: nauc_map_at_5_std
value: 8.071135835564556
- type: nauc_mrr_at_1000_diff1
value: 25.914988046957298
- type: nauc_mrr_at_1000_max
value: 29.500178958357004
- type: nauc_mrr_at_1000_std
value: 30.007836859386217
- type: nauc_mrr_at_100_diff1
value: 25.909334138328415
- type: nauc_mrr_at_100_max
value: 29.52338779009421
- type: nauc_mrr_at_100_std
value: 30.04513581497261
- type: nauc_mrr_at_10_diff1
value: 25.8265466125622
- type: nauc_mrr_at_10_max
value: 29.190136722031696
- type: nauc_mrr_at_10_std
value: 29.91591104432339
- type: nauc_mrr_at_1_diff1
value: 28.59348773396338
- type: nauc_mrr_at_1_max
value: 24.8079752457763
- type: nauc_mrr_at_1_std
value: 23.91126072409742
- type: nauc_mrr_at_20_diff1
value: 25.802689022704183
- type: nauc_mrr_at_20_max
value: 29.530951070963336
- type: nauc_mrr_at_20_std
value: 30.174133821321725
- type: nauc_mrr_at_3_diff1
value: 27.20001662389779
- type: nauc_mrr_at_3_max
value: 27.937268010329507
- type: nauc_mrr_at_3_std
value: 28.192212081421474
- type: nauc_mrr_at_5_diff1
value: 25.808760122402813
- type: nauc_mrr_at_5_max
value: 28.320555828208317
- type: nauc_mrr_at_5_std
value: 28.94783269529472
- type: nauc_ndcg_at_1000_diff1
value: 18.382064145005554
- type: nauc_ndcg_at_1000_max
value: 37.682973683950046
- type: nauc_ndcg_at_1000_std
value: 41.50740480181961
- type: nauc_ndcg_at_100_diff1
value: 17.064373462803946
- type: nauc_ndcg_at_100_max
value: 31.68841170112502
- type: nauc_ndcg_at_100_std
value: 36.129889624470515
- type: nauc_ndcg_at_10_diff1
value: 13.4115588783113
- type: nauc_ndcg_at_10_max
value: 25.02525617768273
- type: nauc_ndcg_at_10_std
value: 34.6721573881345
- type: nauc_ndcg_at_1_diff1
value: 29.894042590382835
- type: nauc_ndcg_at_1_max
value: 20.74535829394909
- type: nauc_ndcg_at_1_std
value: 22.120360699896317
- type: nauc_ndcg_at_20_diff1
value: 15.634409370114245
- type: nauc_ndcg_at_20_max
value: 26.50893784943651
- type: nauc_ndcg_at_20_std
value: 35.038198867324475
- type: nauc_ndcg_at_3_diff1
value: 18.96300171211221
- type: nauc_ndcg_at_3_max
value: 23.33029230184083
- type: nauc_ndcg_at_3_std
value: 29.920377781867707
- type: nauc_ndcg_at_5_diff1
value: 15.79868149715457
- type: nauc_ndcg_at_5_max
value: 22.579264404978712
- type: nauc_ndcg_at_5_std
value: 30.211799699921738
- type: nauc_precision_at_1000_diff1
value: -6.199888311259285
- type: nauc_precision_at_1000_max
value: 9.309794448376303
- type: nauc_precision_at_1000_std
value: 31.78959217396635
- type: nauc_precision_at_100_diff1
value: -6.136903664719646
- type: nauc_precision_at_100_max
value: 22.013385001054626
- type: nauc_precision_at_100_std
value: 48.14689780650813
- type: nauc_precision_at_10_diff1
value: -4.853429266457739
- type: nauc_precision_at_10_max
value: 27.509406452527795
- type: nauc_precision_at_10_std
value: 46.374536894242596
- type: nauc_precision_at_1_diff1
value: 28.59348773396338
- type: nauc_precision_at_1_max
value: 24.8079752457763
- type: nauc_precision_at_1_std
value: 23.91126072409742
- type: nauc_precision_at_20_diff1
value: -3.1905789371666917
- type: nauc_precision_at_20_max
value: 27.176658491295246
- type: nauc_precision_at_20_std
value: 48.18584487920634
- type: nauc_precision_at_3_diff1
value: 8.3848103781276
- type: nauc_precision_at_3_max
value: 27.892039299948824
- type: nauc_precision_at_3_std
value: 36.43253708925813
- type: nauc_precision_at_5_diff1
value: 2.196790718752423
- type: nauc_precision_at_5_max
value: 25.498636373099792
- type: nauc_precision_at_5_std
value: 37.223277286205686
- type: nauc_recall_at_1000_diff1
value: 9.6168415443447
- type: nauc_recall_at_1000_max
value: 30.81068257150451
- type: nauc_recall_at_1000_std
value: 31.23012946206547
- type: nauc_recall_at_100_diff1
value: 8.288803190895507
- type: nauc_recall_at_100_max
value: 28.5985358200399
- type: nauc_recall_at_100_std
value: 29.264243501743554
- type: nauc_recall_at_10_diff1
value: 15.538928611457752
- type: nauc_recall_at_10_max
value: 16.507431812158853
- type: nauc_recall_at_10_std
value: 14.357359644755332
- type: nauc_recall_at_1_diff1
value: 49.48663924597492
- type: nauc_recall_at_1_max
value: 6.253498999289057
- type: nauc_recall_at_1_std
value: -1.0237763936348632
- type: nauc_recall_at_20_diff1
value: 12.33220171683594
- type: nauc_recall_at_20_max
value: 21.401205102336334
- type: nauc_recall_at_20_std
value: 19.894796654272344
- type: nauc_recall_at_3_diff1
value: 32.92453106017296
- type: nauc_recall_at_3_max
value: 12.154084693905993
- type: nauc_recall_at_3_std
value: 7.874826452646235
- type: nauc_recall_at_5_diff1
value: 24.83900378186163
- type: nauc_recall_at_5_max
value: 10.618063467740885
- type: nauc_recall_at_5_std
value: 7.700886647757196
- type: ndcg_at_1
value: 34.83
- type: ndcg_at_10
value: 25.395
- type: ndcg_at_100
value: 23.294
- type: ndcg_at_1000
value: 31.655
- type: ndcg_at_20
value: 23.961
- type: ndcg_at_3
value: 29.720000000000002
- type: ndcg_at_5
value: 27.687
- type: precision_at_1
value: 36.223
- type: precision_at_10
value: 18.884999999999998
- type: precision_at_100
value: 5.944
- type: precision_at_1000
value: 1.757
- type: precision_at_20
value: 14.427000000000001
- type: precision_at_3
value: 27.761000000000003
- type: precision_at_5
value: 23.839
- type: recall_at_1
value: 4.162
- type: recall_at_10
value: 12.139999999999999
- type: recall_at_100
value: 24.006
- type: recall_at_1000
value: 53.617000000000004
- type: recall_at_20
value: 15.412
- type: recall_at_3
value: 7.097
- type: recall_at_5
value: 8.933
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ-PL
revision: f171245712cf85dd4700b06bef18001578d0ca8d
split: test
type: clarin-knext/nq-pl
metrics:
- type: main_score
value: 22.603
- type: map_at_1
value: 9.948
- type: map_at_10
value: 17.845
- type: map_at_100
value: 18.959
- type: map_at_1000
value: 19.048000000000002
- type: map_at_20
value: 18.455
- type: map_at_3
value: 15.132000000000001
- type: map_at_5
value: 16.601
- type: mrr_at_1
value: 11.674391657010428
- type: mrr_at_10
value: 19.470320862991787
- type: mrr_at_100
value: 20.446877601173824
- type: mrr_at_1000
value: 20.522814299465214
- type: mrr_at_20
value: 20.008110000836435
- type: mrr_at_3
value: 16.840478949401305
- type: mrr_at_5
value: 18.30484743144072
- type: nauc_map_at_1000_diff1
value: 18.26172777698686
- type: nauc_map_at_1000_max
value: 31.552551452692246
- type: nauc_map_at_1000_std
value: 22.212928434695396
- type: nauc_map_at_100_diff1
value: 18.24688938509314
- type: nauc_map_at_100_max
value: 31.53817410525147
- type: nauc_map_at_100_std
value: 22.17330126384622
- type: nauc_map_at_10_diff1
value: 18.447992786558256
- type: nauc_map_at_10_max
value: 30.60350408504903
- type: nauc_map_at_10_std
value: 20.755467147228096
- type: nauc_map_at_1_diff1
value: 22.418576585549367
- type: nauc_map_at_1_max
value: 25.037598941208795
- type: nauc_map_at_1_std
value: 14.90958753798771
- type: nauc_map_at_20_diff1
value: 18.340722439154305
- type: nauc_map_at_20_max
value: 31.196838529305232
- type: nauc_map_at_20_std
value: 21.552426519419058
- type: nauc_map_at_3_diff1
value: 17.940689608351526
- type: nauc_map_at_3_max
value: 28.32670652769566
- type: nauc_map_at_3_std
value: 18.933678775214837
- type: nauc_map_at_5_diff1
value: 18.391656882948464
- type: nauc_map_at_5_max
value: 29.442343951102085
- type: nauc_map_at_5_std
value: 19.52289104922354
- type: nauc_mrr_at_1000_diff1
value: 17.527174397586858
- type: nauc_mrr_at_1000_max
value: 31.602488727319578
- type: nauc_mrr_at_1000_std
value: 22.93577716482068
- type: nauc_mrr_at_100_diff1
value: 17.522315985248973
- type: nauc_mrr_at_100_max
value: 31.59648863674416
- type: nauc_mrr_at_100_std
value: 22.91993463994322
- type: nauc_mrr_at_10_diff1
value: 17.576986591026188
- type: nauc_mrr_at_10_max
value: 31.004768241816667
- type: nauc_mrr_at_10_std
value: 21.965789582568895
- type: nauc_mrr_at_1_diff1
value: 21.13678758908292
- type: nauc_mrr_at_1_max
value: 26.011414032723156
- type: nauc_mrr_at_1_std
value: 16.254994138259015
- type: nauc_mrr_at_20_diff1
value: 17.53035779699737
- type: nauc_mrr_at_20_max
value: 31.388046420817066
- type: nauc_mrr_at_20_std
value: 22.542621346666966
- type: nauc_mrr_at_3_diff1
value: 17.10815729544247
- type: nauc_mrr_at_3_max
value: 29.09795467526024
- type: nauc_mrr_at_3_std
value: 20.212196884709975
- type: nauc_mrr_at_5_diff1
value: 17.508485448153106
- type: nauc_mrr_at_5_max
value: 30.051730901603225
- type: nauc_mrr_at_5_std
value: 20.812623893192008
- type: nauc_ndcg_at_1000_diff1
value: 17.42831835054262
- type: nauc_ndcg_at_1000_max
value: 36.852823471922896
- type: nauc_ndcg_at_1000_std
value: 29.5092221137645
- type: nauc_ndcg_at_100_diff1
value: 17.18145786352413
- type: nauc_ndcg_at_100_max
value: 36.68127658612261
- type: nauc_ndcg_at_100_std
value: 29.070246776560733
- type: nauc_ndcg_at_10_diff1
value: 17.650254435216336
- type: nauc_ndcg_at_10_max
value: 32.9711852272957
- type: nauc_ndcg_at_10_std
value: 23.33796255600112
- type: nauc_ndcg_at_1_diff1
value: 21.13678758908292
- type: nauc_ndcg_at_1_max
value: 26.011414032723156
- type: nauc_ndcg_at_1_std
value: 16.254994138259015
- type: nauc_ndcg_at_20_diff1
value: 17.41646581029652
- type: nauc_ndcg_at_20_max
value: 34.56260516594143
- type: nauc_ndcg_at_20_std
value: 25.560816497093715
- type: nauc_ndcg_at_3_diff1
value: 16.72984648539772
- type: nauc_ndcg_at_3_max
value: 29.165578029472623
- type: nauc_ndcg_at_3_std
value: 20.016518044505823
- type: nauc_ndcg_at_5_diff1
value: 17.531443204854625
- type: nauc_ndcg_at_5_max
value: 30.813625874766686
- type: nauc_ndcg_at_5_std
value: 20.89999189522855
- type: nauc_precision_at_1000_diff1
value: 8.023671491885642
- type: nauc_precision_at_1000_max
value: 38.57244285086915
- type: nauc_precision_at_1000_std
value: 42.75950436813853
- type: nauc_precision_at_100_diff1
value: 10.533355130718231
- type: nauc_precision_at_100_max
value: 43.7116482300273
- type: nauc_precision_at_100_std
value: 44.060964750358266
- type: nauc_precision_at_10_diff1
value: 14.972903054044348
- type: nauc_precision_at_10_max
value: 38.05240735938072
- type: nauc_precision_at_10_std
value: 29.648310668280097
- type: nauc_precision_at_1_diff1
value: 21.13678758908292
- type: nauc_precision_at_1_max
value: 26.011414032723156
- type: nauc_precision_at_1_std
value: 16.254994138259015
- type: nauc_precision_at_20_diff1
value: 13.554472011508237
- type: nauc_precision_at_20_max
value: 41.02208151220986
- type: nauc_precision_at_20_std
value: 34.85824745823735
- type: nauc_precision_at_3_diff1
value: 14.116040804511186
- type: nauc_precision_at_3_max
value: 31.682445198182435
- type: nauc_precision_at_3_std
value: 23.62076223063366
- type: nauc_precision_at_5_diff1
value: 15.243710801321306
- type: nauc_precision_at_5_max
value: 34.19548751195127
- type: nauc_precision_at_5_std
value: 24.721994359051823
- type: nauc_recall_at_1000_diff1
value: 16.364726224776085
- type: nauc_recall_at_1000_max
value: 61.50384743818951
- type: nauc_recall_at_1000_std
value: 64.05244001475157
- type: nauc_recall_at_100_diff1
value: 14.842800608772844
- type: nauc_recall_at_100_max
value: 51.09642253042941
- type: nauc_recall_at_100_std
value: 48.974514602283755
- type: nauc_recall_at_10_diff1
value: 16.295810264449052
- type: nauc_recall_at_10_max
value: 36.62230075893423
- type: nauc_recall_at_10_std
value: 27.091531221220855
- type: nauc_recall_at_1_diff1
value: 22.418576585549367
- type: nauc_recall_at_1_max
value: 25.037598941208795
- type: nauc_recall_at_1_std
value: 14.90958753798771
- type: nauc_recall_at_20_diff1
value: 15.663708298579454
- type: nauc_recall_at_20_max
value: 40.669425710354055
- type: nauc_recall_at_20_std
value: 32.92105064475319
- type: nauc_recall_at_3_diff1
value: 14.248164870616547
- type: nauc_recall_at_3_max
value: 29.788818279139523
- type: nauc_recall_at_3_std
value: 20.94235306703937
- type: nauc_recall_at_5_diff1
value: 16.12430269320114
- type: nauc_recall_at_5_max
value: 32.56849460357168
- type: nauc_recall_at_5_std
value: 22.28933193164056
- type: ndcg_at_1
value: 11.674
- type: ndcg_at_10
value: 22.603
- type: ndcg_at_100
value: 28.094
- type: ndcg_at_1000
value: 30.489
- type: ndcg_at_20
value: 24.697
- type: ndcg_at_3
value: 17.104
- type: ndcg_at_5
value: 19.708000000000002
- type: precision_at_1
value: 11.674
- type: precision_at_10
value: 4.287
- type: precision_at_100
value: 0.743
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 2.64
- type: precision_at_3
value: 8.324
- type: precision_at_5
value: 6.483
- type: recall_at_1
value: 9.948
- type: recall_at_10
value: 35.772
- type: recall_at_100
value: 60.989000000000004
- type: recall_at_1000
value: 79.321
- type: recall_at_20
value: 43.608000000000004
- type: recall_at_3
value: 21.125
- type: recall_at_5
value: 27.211000000000002
task:
type: Retrieval
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.58934260063714
- type: ap
value: 74.96037603906956
- type: ap_weighted
value: 74.96037603906956
- type: f1
value: 62.46883531701779
- type: f1_weighted
value: 65.87422072252049
- type: main_score
value: 65.58934260063714
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics:
- type: cosine_accuracy
value: 97.49536178107606
- type: cosine_accuracy_threshold
value: 64.87605571746826
- type: cosine_ap
value: 99.41573082613479
- type: cosine_f1
value: 95.98811292719166
- type: cosine_f1_threshold
value: 62.816452980041504
- type: cosine_precision
value: 93.6231884057971
- type: cosine_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.49536178107606
- type: dot_accuracy_threshold
value: 64.87605571746826
- type: dot_ap
value: 99.41573082613479
- type: dot_f1
value: 95.98811292719166
- type: dot_f1_threshold
value: 62.81645894050598
- type: dot_precision
value: 93.6231884057971
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.49536178107606
- type: euclidean_accuracy_threshold
value: 83.81399512290955
- type: euclidean_ap
value: 99.41573082613479
- type: euclidean_f1
value: 95.98811292719166
- type: euclidean_f1_threshold
value: 86.23623847961426
- type: euclidean_precision
value: 93.6231884057971
- type: euclidean_recall
value: 98.47560975609755
- type: main_score
value: 99.4366325576277
- type: manhattan_accuracy
value: 97.49536178107606
- type: manhattan_accuracy_threshold
value: 1991.1922454833984
- type: manhattan_ap
value: 99.4366325576277
- type: manhattan_f1
value: 95.95202398800599
- type: manhattan_f1_threshold
value: 2005.5305480957031
- type: manhattan_precision
value: 94.3952802359882
- type: manhattan_recall
value: 97.5609756097561
- type: max_ap
value: 99.4366325576277
- type: max_f1
value: 95.98811292719166
- type: max_precision
value: 94.3952802359882
- type: max_recall
value: 98.47560975609755
- type: similarity_accuracy
value: 97.49536178107606
- type: similarity_accuracy_threshold
value: 64.87605571746826
- type: similarity_ap
value: 99.41573082613479
- type: similarity_f1
value: 95.98811292719166
- type: similarity_f1_threshold
value: 62.816452980041504
- type: similarity_precision
value: 93.6231884057971
- type: similarity_recall
value: 98.47560975609755
task:
type: PairClassification
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 73.49030470914128
- type: f1
value: 64.44026912860524
- type: f1_weighted
value: 70.76142496919624
- type: main_score
value: 73.49030470914128
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 56.1336032388664
- type: f1
value: 40.10783686862694
- type: f1_weighted
value: 52.57241968032103
- type: main_score
value: 56.1336032388664
task:
type: Classification
- dataset:
config: default
name: MTEB PPC
revision: 2c7d2df57801a591f6b1e3aaf042e7a04ec7d9f2
split: test
type: PL-MTEB/ppc-pairclassification
metrics:
- type: cosine_accuracy
value: 75.7
- type: cosine_accuracy_threshold
value: 82.45353102684021
- type: cosine_ap
value: 87.18435695095992
- type: cosine_f1
value: 80.79877112135176
- type: cosine_f1_threshold
value: 80.05339503288269
- type: cosine_precision
value: 75.35816618911176
- type: cosine_recall
value: 87.08609271523179
- type: dot_accuracy
value: 75.7
- type: dot_accuracy_threshold
value: 82.45352506637573
- type: dot_ap
value: 87.18435695095992
- type: dot_f1
value: 80.79877112135176
- type: dot_f1_threshold
value: 80.05340099334717
- type: dot_precision
value: 75.35816618911176
- type: dot_recall
value: 87.08609271523179
- type: euclidean_accuracy
value: 75.7
- type: euclidean_accuracy_threshold
value: 59.23929214477539
- type: euclidean_ap
value: 87.18435695095992
- type: euclidean_f1
value: 80.79877112135176
- type: euclidean_f1_threshold
value: 63.16102743148804
- type: euclidean_precision
value: 75.35816618911176
- type: euclidean_recall
value: 87.08609271523179
- type: main_score
value: 87.18435695095992
- type: manhattan_accuracy
value: 75.2
- type: manhattan_accuracy_threshold
value: 1350.9596824645996
- type: manhattan_ap
value: 86.98837530998256
- type: manhattan_f1
value: 80.67226890756302
- type: manhattan_f1_threshold
value: 1481.105613708496
- type: manhattan_precision
value: 74.8936170212766
- type: manhattan_recall
value: 87.41721854304636
- type: max_ap
value: 87.18435695095992
- type: max_f1
value: 80.79877112135176
- type: max_precision
value: 75.35816618911176
- type: max_recall
value: 87.41721854304636
- type: similarity_accuracy
value: 75.7
- type: similarity_accuracy_threshold
value: 82.45353102684021
- type: similarity_ap
value: 87.18435695095992
- type: similarity_f1
value: 80.79877112135176
- type: similarity_f1_threshold
value: 80.05339503288269
- type: similarity_precision
value: 75.35816618911176
- type: similarity_recall
value: 87.08609271523179
task:
type: PairClassification
- dataset:
config: default
name: MTEB Quora-PL
revision: 0be27e93455051e531182b85e85e425aba12e9d4
split: test
type: clarin-knext/quora-pl
metrics:
- type: main_score
value: 76.998
- type: map_at_1
value: 59.391000000000005
- type: map_at_10
value: 72.16900000000001
- type: map_at_100
value: 73.032
- type: map_at_1000
value: 73.06899999999999
- type: map_at_20
value: 72.714
- type: map_at_3
value: 69.15299999999999
- type: map_at_5
value: 70.987
- type: mrr_at_1
value: 68.42
- type: mrr_at_10
value: 76.16671428571387
- type: mrr_at_100
value: 76.47829123882859
- type: mrr_at_1000
value: 76.48677359771172
- type: mrr_at_20
value: 76.37813270222156
- type: mrr_at_3
value: 74.58166666666627
- type: mrr_at_5
value: 75.55716666666603
- type: nauc_map_at_1000_diff1
value: 69.61188513700026
- type: nauc_map_at_1000_max
value: 35.048941479907754
- type: nauc_map_at_1000_std
value: -20.0870344911168
- type: nauc_map_at_100_diff1
value: 69.61947691592164
- type: nauc_map_at_100_max
value: 35.033733604763725
- type: nauc_map_at_100_std
value: -20.139480957962718
- type: nauc_map_at_10_diff1
value: 69.66441777665835
- type: nauc_map_at_10_max
value: 34.37685681869468
- type: nauc_map_at_10_std
value: -21.444655375177106
- type: nauc_map_at_1_diff1
value: 73.03533775469124
- type: nauc_map_at_1_max
value: 28.361321068177816
- type: nauc_map_at_1_std
value: -23.44707326868221
- type: nauc_map_at_20_diff1
value: 69.62828183867681
- type: nauc_map_at_20_max
value: 34.81438496306748
- type: nauc_map_at_20_std
value: -20.70392332573099
- type: nauc_map_at_3_diff1
value: 69.68889489109979
- type: nauc_map_at_3_max
value: 32.46102571539603
- type: nauc_map_at_3_std
value: -23.38999293723788
- type: nauc_map_at_5_diff1
value: 69.78892096736786
- type: nauc_map_at_5_max
value: 33.538196855782914
- type: nauc_map_at_5_std
value: -22.484473756616644
- type: nauc_mrr_at_1000_diff1
value: 70.86605266935713
- type: nauc_mrr_at_1000_max
value: 39.23012904807791
- type: nauc_mrr_at_1000_std
value: -15.7945348852456
- type: nauc_mrr_at_100_diff1
value: 70.86280901414926
- type: nauc_mrr_at_100_max
value: 39.23362334217244
- type: nauc_mrr_at_100_std
value: -15.782514659328978
- type: nauc_mrr_at_10_diff1
value: 70.75755399509156
- type: nauc_mrr_at_10_max
value: 39.272495418437686
- type: nauc_mrr_at_10_std
value: -15.781106645439996
- type: nauc_mrr_at_1_diff1
value: 72.85504028372341
- type: nauc_mrr_at_1_max
value: 37.99685495245659
- type: nauc_mrr_at_1_std
value: -17.459649186396685
- type: nauc_mrr_at_20_diff1
value: 70.82261857160199
- type: nauc_mrr_at_20_max
value: 39.25660219447417
- type: nauc_mrr_at_20_std
value: -15.807365557200281
- type: nauc_mrr_at_3_diff1
value: 70.79376444174159
- type: nauc_mrr_at_3_max
value: 38.97623690163996
- type: nauc_mrr_at_3_std
value: -16.393842407269872
- type: nauc_mrr_at_5_diff1
value: 70.77811077343011
- type: nauc_mrr_at_5_max
value: 39.066661862996334
- type: nauc_mrr_at_5_std
value: -16.06138623512058
- type: nauc_ndcg_at_1000_diff1
value: 69.38432460176631
- type: nauc_ndcg_at_1000_max
value: 37.41326409294141
- type: nauc_ndcg_at_1000_std
value: -16.567106335363547
- type: nauc_ndcg_at_100_diff1
value: 69.33661321994221
- type: nauc_ndcg_at_100_max
value: 37.40443590169158
- type: nauc_ndcg_at_100_std
value: -16.35403457343329
- type: nauc_ndcg_at_10_diff1
value: 68.94489912960861
- type: nauc_ndcg_at_10_max
value: 36.2506071214321
- type: nauc_ndcg_at_10_std
value: -18.82069883161433
- type: nauc_ndcg_at_1_diff1
value: 72.72133417454367
- type: nauc_ndcg_at_1_max
value: 38.331224491505104
- type: nauc_ndcg_at_1_std
value: -17.16079633961818
- type: nauc_ndcg_at_20_diff1
value: 69.15086421535133
- type: nauc_ndcg_at_20_max
value: 36.89767798755098
- type: nauc_ndcg_at_20_std
value: -17.86958697698032
- type: nauc_ndcg_at_3_diff1
value: 68.70396833880102
- type: nauc_ndcg_at_3_max
value: 35.03484635918643
- type: nauc_ndcg_at_3_std
value: -20.273396524173844
- type: nauc_ndcg_at_5_diff1
value: 68.93056915501342
- type: nauc_ndcg_at_5_max
value: 35.38497733312458
- type: nauc_ndcg_at_5_std
value: -19.840947709262004
- type: nauc_precision_at_1000_diff1
value: -34.14718697098016
- type: nauc_precision_at_1000_max
value: 3.6293313781394763
- type: nauc_precision_at_1000_std
value: 35.18150366797986
- type: nauc_precision_at_100_diff1
value: -30.4027079095321
- type: nauc_precision_at_100_max
value: 6.809907739167871
- type: nauc_precision_at_100_std
value: 34.540918468349126
- type: nauc_precision_at_10_diff1
value: -13.640657282621275
- type: nauc_precision_at_10_max
value: 15.027602319886368
- type: nauc_precision_at_10_std
value: 19.99864404314453
- type: nauc_precision_at_1_diff1
value: 72.72133417454367
- type: nauc_precision_at_1_max
value: 38.331224491505104
- type: nauc_precision_at_1_std
value: -17.16079633961818
- type: nauc_precision_at_20_diff1
value: -22.04518115519088
- type: nauc_precision_at_20_max
value: 11.694911426947577
- type: nauc_precision_at_20_std
value: 27.0383781477066
- type: nauc_precision_at_3_diff1
value: 13.551932989888382
- type: nauc_precision_at_3_max
value: 23.434121945030604
- type: nauc_precision_at_3_std
value: 2.691762192244095
- type: nauc_precision_at_5_diff1
value: -0.530904057361583
- type: nauc_precision_at_5_max
value: 19.274513974074186
- type: nauc_precision_at_5_std
value: 11.166696219691481
- type: nauc_recall_at_1000_diff1
value: 57.69646260925434
- type: nauc_recall_at_1000_max
value: 45.515450558710825
- type: nauc_recall_at_1000_std
value: 33.3128999778333
- type: nauc_recall_at_100_diff1
value: 59.44993252237884
- type: nauc_recall_at_100_max
value: 41.168864107589144
- type: nauc_recall_at_100_std
value: 13.174320315241195
- type: nauc_recall_at_10_diff1
value: 61.74029254342778
- type: nauc_recall_at_10_max
value: 33.83885249812004
- type: nauc_recall_at_10_std
value: -17.974403452647497
- type: nauc_recall_at_1_diff1
value: 73.03533775469124
- type: nauc_recall_at_1_max
value: 28.361321068177816
- type: nauc_recall_at_1_std
value: -23.44707326868221
- type: nauc_recall_at_20_diff1
value: 60.43007696085838
- type: nauc_recall_at_20_max
value: 35.90250935704539
- type: nauc_recall_at_20_std
value: -12.539813163606686
- type: nauc_recall_at_3_diff1
value: 64.87577464206726
- type: nauc_recall_at_3_max
value: 30.325674554926348
- type: nauc_recall_at_3_std
value: -24.050361392480443
- type: nauc_recall_at_5_diff1
value: 63.71726415589154
- type: nauc_recall_at_5_max
value: 31.365393247615298
- type: nauc_recall_at_5_std
value: -22.097544116643387
- type: ndcg_at_1
value: 68.47999999999999
- type: ndcg_at_10
value: 76.998
- type: ndcg_at_100
value: 79.45400000000001
- type: ndcg_at_1000
value: 79.935
- type: ndcg_at_20
value: 78.22
- type: ndcg_at_3
value: 73.127
- type: ndcg_at_5
value: 75.13499999999999
- type: precision_at_1
value: 68.47999999999999
- type: precision_at_10
value: 11.821
- type: precision_at_100
value: 1.438
- type: precision_at_1000
value: 0.154
- type: precision_at_20
value: 6.4350000000000005
- type: precision_at_3
value: 31.96
- type: precision_at_5
value: 21.279999999999998
- type: recall_at_1
value: 59.391000000000005
- type: recall_at_10
value: 86.722
- type: recall_at_100
value: 96.143
- type: recall_at_1000
value: 99.092
- type: recall_at_20
value: 90.88300000000001
- type: recall_at_3
value: 75.81400000000001
- type: recall_at_5
value: 81.19800000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS-PL
revision: 45452b03f05560207ef19149545f168e596c9337
split: test
type: clarin-knext/scidocs-pl
metrics:
- type: main_score
value: 13.038
- type: map_at_1
value: 2.785
- type: map_at_10
value: 7.24
- type: map_at_100
value: 8.751000000000001
- type: map_at_1000
value: 9.001000000000001
- type: map_at_20
value: 7.997999999999999
- type: map_at_3
value: 5.139
- type: map_at_5
value: 6.142
- type: mrr_at_1
value: 13.700000000000001
- type: mrr_at_10
value: 22.60158730158729
- type: mrr_at_100
value: 23.72791508184251
- type: mrr_at_1000
value: 23.810527360772817
- type: mrr_at_20
value: 23.241815149075197
- type: mrr_at_3
value: 19.60000000000002
- type: mrr_at_5
value: 21.224999999999998
- type: nauc_map_at_1000_diff1
value: 14.792227224924506
- type: nauc_map_at_1000_max
value: 32.301641383960124
- type: nauc_map_at_1000_std
value: 23.083104358905977
- type: nauc_map_at_100_diff1
value: 14.803863271383166
- type: nauc_map_at_100_max
value: 32.24680252823908
- type: nauc_map_at_100_std
value: 22.748086109451773
- type: nauc_map_at_10_diff1
value: 15.795155883364743
- type: nauc_map_at_10_max
value: 30.944058206585463
- type: nauc_map_at_10_std
value: 18.708078547726842
- type: nauc_map_at_1_diff1
value: 21.132398215573865
- type: nauc_map_at_1_max
value: 29.19592327750959
- type: nauc_map_at_1_std
value: 13.996493176089015
- type: nauc_map_at_20_diff1
value: 15.077937784358452
- type: nauc_map_at_20_max
value: 31.657769880494403
- type: nauc_map_at_20_std
value: 20.60155411885354
- type: nauc_map_at_3_diff1
value: 18.674857148125
- type: nauc_map_at_3_max
value: 30.693417190589383
- type: nauc_map_at_3_std
value: 16.47059364780481
- type: nauc_map_at_5_diff1
value: 16.575681500234854
- type: nauc_map_at_5_max
value: 30.082817752366125
- type: nauc_map_at_5_std
value: 16.662663606573776
- type: nauc_mrr_at_1000_diff1
value: 16.522679131105793
- type: nauc_mrr_at_1000_max
value: 27.23085993594398
- type: nauc_mrr_at_1000_std
value: 17.51392936535595
- type: nauc_mrr_at_100_diff1
value: 16.530117282112702
- type: nauc_mrr_at_100_max
value: 27.21672480216746
- type: nauc_mrr_at_100_std
value: 17.537026259653445
- type: nauc_mrr_at_10_diff1
value: 16.487235038131733
- type: nauc_mrr_at_10_max
value: 27.225450717843323
- type: nauc_mrr_at_10_std
value: 17.148693690389308
- type: nauc_mrr_at_1_diff1
value: 21.500757577390356
- type: nauc_mrr_at_1_max
value: 29.155414361425848
- type: nauc_mrr_at_1_std
value: 14.066153856101241
- type: nauc_mrr_at_20_diff1
value: 16.35982399761223
- type: nauc_mrr_at_20_max
value: 27.222179685954384
- type: nauc_mrr_at_20_std
value: 17.446818156563065
- type: nauc_mrr_at_3_diff1
value: 17.458713266374655
- type: nauc_mrr_at_3_max
value: 26.24442929157636
- type: nauc_mrr_at_3_std
value: 15.474103091301044
- type: nauc_mrr_at_5_diff1
value: 16.5126045582872
- type: nauc_mrr_at_5_max
value: 26.997210926210446
- type: nauc_mrr_at_5_std
value: 16.704873410048148
- type: nauc_ndcg_at_1000_diff1
value: 12.907773784346746
- type: nauc_ndcg_at_1000_max
value: 33.34766220820817
- type: nauc_ndcg_at_1000_std
value: 30.482401904164757
- type: nauc_ndcg_at_100_diff1
value: 13.232279099200772
- type: nauc_ndcg_at_100_max
value: 32.36971943877284
- type: nauc_ndcg_at_100_std
value: 28.885308987810603
- type: nauc_ndcg_at_10_diff1
value: 14.263079852214009
- type: nauc_ndcg_at_10_max
value: 29.756761364913597
- type: nauc_ndcg_at_10_std
value: 20.083627271228888
- type: nauc_ndcg_at_1_diff1
value: 21.500757577390356
- type: nauc_ndcg_at_1_max
value: 29.155414361425848
- type: nauc_ndcg_at_1_std
value: 14.066153856101241
- type: nauc_ndcg_at_20_diff1
value: 12.922160932922422
- type: nauc_ndcg_at_20_max
value: 30.932912450602785
- type: nauc_ndcg_at_20_std
value: 23.182250500209516
- type: nauc_ndcg_at_3_diff1
value: 17.21918294663663
- type: nauc_ndcg_at_3_max
value: 28.662429889428637
- type: nauc_ndcg_at_3_std
value: 16.8401928942087
- type: nauc_ndcg_at_5_diff1
value: 15.024056520905358
- type: nauc_ndcg_at_5_max
value: 28.783882370742838
- type: nauc_ndcg_at_5_std
value: 17.956997691110093
- type: nauc_precision_at_1000_diff1
value: 4.853325331972668
- type: nauc_precision_at_1000_max
value: 30.15694152384708
- type: nauc_precision_at_1000_std
value: 38.55692767533825
- type: nauc_precision_at_100_diff1
value: 8.113117956423707
- type: nauc_precision_at_100_max
value: 30.579313799148494
- type: nauc_precision_at_100_std
value: 37.078327072376624
- type: nauc_precision_at_10_diff1
value: 10.323074186311555
- type: nauc_precision_at_10_max
value: 29.267955393045213
- type: nauc_precision_at_10_std
value: 22.493435993948
- type: nauc_precision_at_1_diff1
value: 21.500757577390356
- type: nauc_precision_at_1_max
value: 29.155414361425848
- type: nauc_precision_at_1_std
value: 14.066153856101241
- type: nauc_precision_at_20_diff1
value: 7.296113998064506
- type: nauc_precision_at_20_max
value: 29.990871534639396
- type: nauc_precision_at_20_std
value: 27.109509055275005
- type: nauc_precision_at_3_diff1
value: 15.390787042974221
- type: nauc_precision_at_3_max
value: 28.84488812625923
- type: nauc_precision_at_3_std
value: 18.32236552735027
- type: nauc_precision_at_5_diff1
value: 11.503698423183337
- type: nauc_precision_at_5_max
value: 28.057493966763282
- type: nauc_precision_at_5_std
value: 19.611698266221076
- type: nauc_recall_at_1000_diff1
value: 5.664077565322699
- type: nauc_recall_at_1000_max
value: 30.448757418101447
- type: nauc_recall_at_1000_std
value: 39.27731310660493
- type: nauc_recall_at_100_diff1
value: 8.425909931770086
- type: nauc_recall_at_100_max
value: 30.68171063121248
- type: nauc_recall_at_100_std
value: 37.184544204955074
- type: nauc_recall_at_10_diff1
value: 10.47166367371188
- type: nauc_recall_at_10_max
value: 29.14586678828798
- type: nauc_recall_at_10_std
value: 22.111878920453464
- type: nauc_recall_at_1_diff1
value: 21.132398215573865
- type: nauc_recall_at_1_max
value: 29.19592327750959
- type: nauc_recall_at_1_std
value: 13.996493176089015
- type: nauc_recall_at_20_diff1
value: 7.4486268209490465
- type: nauc_recall_at_20_max
value: 29.759399489054555
- type: nauc_recall_at_20_std
value: 26.731517559908852
- type: nauc_recall_at_3_diff1
value: 15.400192355820627
- type: nauc_recall_at_3_max
value: 28.572542534889312
- type: nauc_recall_at_3_std
value: 17.816298041992443
- type: nauc_recall_at_5_diff1
value: 11.600069164989952
- type: nauc_recall_at_5_max
value: 27.974947140469958
- type: nauc_recall_at_5_std
value: 19.139625890938866
- type: ndcg_at_1
value: 13.700000000000001
- type: ndcg_at_10
value: 13.038
- type: ndcg_at_100
value: 19.628
- type: ndcg_at_1000
value: 24.892
- type: ndcg_at_20
value: 15.296999999999999
- type: ndcg_at_3
value: 11.828
- type: ndcg_at_5
value: 10.532
- type: precision_at_1
value: 13.700000000000001
- type: precision_at_10
value: 6.99
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.294
- type: precision_at_20
value: 4.8
- type: precision_at_3
value: 11.233
- type: precision_at_5
value: 9.44
- type: recall_at_1
value: 2.785
- type: recall_at_10
value: 14.198
- type: recall_at_100
value: 33.768
- type: recall_at_1000
value: 59.821999999999996
- type: recall_at_20
value: 19.497999999999998
- type: recall_at_3
value: 6.877999999999999
- type: recall_at_5
value: 9.613
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics:
- type: cosine_accuracy
value: 77.45617611088463
- type: cosine_accuracy_threshold
value: 88.67492079734802
- type: cosine_ap
value: 62.798198995025665
- type: cosine_f1
value: 60.74950690335306
- type: cosine_f1_threshold
value: 80.56387305259705
- type: cosine_precision
value: 50.256410256410255
- type: cosine_recall
value: 76.78062678062678
- type: dot_accuracy
value: 77.45617611088463
- type: dot_accuracy_threshold
value: 88.6749267578125
- type: dot_ap
value: 62.798159152951385
- type: dot_f1
value: 60.74950690335306
- type: dot_f1_threshold
value: 80.56387305259705
- type: dot_precision
value: 50.256410256410255
- type: dot_recall
value: 76.78062678062678
- type: euclidean_accuracy
value: 77.45617611088463
- type: euclidean_accuracy_threshold
value: 47.592175006866455
- type: euclidean_ap
value: 62.79814750094985
- type: euclidean_f1
value: 60.74950690335306
- type: euclidean_f1_threshold
value: 62.347614765167236
- type: euclidean_precision
value: 50.256410256410255
- type: euclidean_recall
value: 76.78062678062678
- type: main_score
value: 62.798198995025665
- type: manhattan_accuracy
value: 77.27272727272727
- type: manhattan_accuracy_threshold
value: 975.9557723999023
- type: manhattan_ap
value: 62.33701490592974
- type: manhattan_f1
value: 60.3921568627451
- type: manhattan_f1_threshold
value: 1475.3839492797852
- type: manhattan_precision
value: 49.769159741458914
- type: manhattan_recall
value: 76.78062678062678
- type: max_ap
value: 62.798198995025665
- type: max_f1
value: 60.74950690335306
- type: max_precision
value: 50.256410256410255
- type: max_recall
value: 76.78062678062678
- type: similarity_accuracy
value: 77.45617611088463
- type: similarity_accuracy_threshold
value: 88.67492079734802
- type: similarity_ap
value: 62.798198995025665
- type: similarity_f1
value: 60.74950690335306
- type: similarity_f1_threshold
value: 80.56387305259705
- type: similarity_precision
value: 50.256410256410255
- type: similarity_recall
value: 76.78062678062678
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics:
- type: cosine_pearson
value: 72.36287255590445
- type: cosine_spearman
value: 66.30825825122318
- type: euclidean_pearson
value: 68.92313932419128
- type: euclidean_spearman
value: 66.30826006369618
- type: main_score
value: 66.30825825122318
- type: manhattan_pearson
value: 68.66991543703946
- type: manhattan_spearman
value: 66.0242047018923
- type: pearson
value: 72.36287255590445
- type: spearman
value: 66.30825825122318
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 41.56662243222903
- type: cosine_spearman
value: 44.94984671604992
- type: euclidean_pearson
value: 27.88886658631932
- type: euclidean_spearman
value: 44.94984671604992
- type: main_score
value: 44.94984671604992
- type: manhattan_pearson
value: 27.467462847157798
- type: manhattan_spearman
value: 44.990280944902125
- type: pearson
value: 41.56662243222903
- type: spearman
value: 44.94984671604992
task:
type: STS
- dataset:
config: pl-en
name: MTEB STS22 (pl-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 78.67129157333113
- type: cosine_spearman
value: 77.17497249706467
- type: euclidean_pearson
value: 78.93527680834069
- type: euclidean_spearman
value: 77.17497249706467
- type: main_score
value: 77.17497249706467
- type: manhattan_pearson
value: 79.17117078125075
- type: manhattan_spearman
value: 77.98920639910075
- type: pearson
value: 78.67129157333113
- type: spearman
value: 77.17497249706467
task:
type: STS
- dataset:
config: de-pl
name: MTEB STS22 (de-pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 38.70216637556677
- type: cosine_spearman
value: 55.768121437825556
- type: euclidean_pearson
value: 41.389482428930485
- type: euclidean_spearman
value: 55.768121437825556
- type: main_score
value: 55.768121437825556
- type: manhattan_pearson
value: 42.7616496232802
- type: manhattan_spearman
value: 54.44397498734157
- type: pearson
value: 38.70216637556677
- type: spearman
value: 55.768121437825556
task:
type: STS
- dataset:
config: fr-pl
name: MTEB STS22 (fr-pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 83.39168516605531
- type: cosine_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 83.7912731376875
- type: euclidean_spearman
value: 84.51542547285167
- type: main_score
value: 84.51542547285167
- type: manhattan_pearson
value: 82.28209868239296
- type: manhattan_spearman
value: 84.51542547285167
- type: pearson
value: 83.39168516605531
- type: spearman
value: 84.51542547285167
task:
type: STS
- dataset:
config: default
name: MTEB SciFact-PL
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
split: test
type: clarin-knext/scifact-pl
metrics:
- type: main_score
value: 57.827
- type: map_at_1
value: 45.083
- type: map_at_10
value: 53.83
- type: map_at_100
value: 54.577
- type: map_at_1000
value: 54.623
- type: map_at_20
value: 54.211
- type: map_at_3
value: 51.304
- type: map_at_5
value: 52.851000000000006
- type: mrr_at_1
value: 47.333333333333336
- type: mrr_at_10
value: 55.07949735449736
- type: mrr_at_100
value: 55.710506477168956
- type: mrr_at_1000
value: 55.748401782889445
- type: mrr_at_20
value: 55.409548920578345
- type: mrr_at_3
value: 53.055555555555564
- type: mrr_at_5
value: 54.422222222222224
- type: nauc_map_at_1000_diff1
value: 56.75114793396484
- type: nauc_map_at_1000_max
value: 45.557101118136366
- type: nauc_map_at_1000_std
value: 21.122840914857495
- type: nauc_map_at_100_diff1
value: 56.738747688350024
- type: nauc_map_at_100_max
value: 45.55491958094813
- type: nauc_map_at_100_std
value: 21.12266632389643
- type: nauc_map_at_10_diff1
value: 56.926041668030855
- type: nauc_map_at_10_max
value: 45.2382783831653
- type: nauc_map_at_10_std
value: 20.922255034211766
- type: nauc_map_at_1_diff1
value: 60.98838903764472
- type: nauc_map_at_1_max
value: 43.22668392792625
- type: nauc_map_at_1_std
value: 17.29004046426385
- type: nauc_map_at_20_diff1
value: 56.848541422173795
- type: nauc_map_at_20_max
value: 45.59725008207042
- type: nauc_map_at_20_std
value: 21.177613569735655
- type: nauc_map_at_3_diff1
value: 58.23995403356206
- type: nauc_map_at_3_max
value: 44.76675994666382
- type: nauc_map_at_3_std
value: 18.839553176727783
- type: nauc_map_at_5_diff1
value: 56.99049510687553
- type: nauc_map_at_5_max
value: 44.71681163401595
- type: nauc_map_at_5_std
value: 19.453824672770455
- type: nauc_mrr_at_1000_diff1
value: 57.4953870158563
- type: nauc_mrr_at_1000_max
value: 46.79551970939633
- type: nauc_mrr_at_1000_std
value: 23.71693511404122
- type: nauc_mrr_at_100_diff1
value: 57.482272276265235
- type: nauc_mrr_at_100_max
value: 46.79105970491737
- type: nauc_mrr_at_100_std
value: 23.705546007429124
- type: nauc_mrr_at_10_diff1
value: 57.630280158288926
- type: nauc_mrr_at_10_max
value: 46.646619843739465
- type: nauc_mrr_at_10_std
value: 23.642389853421577
- type: nauc_mrr_at_1_diff1
value: 61.903420841877356
- type: nauc_mrr_at_1_max
value: 46.95318894276891
- type: nauc_mrr_at_1_std
value: 23.19343113872584
- type: nauc_mrr_at_20_diff1
value: 57.574039026825815
- type: nauc_mrr_at_20_max
value: 46.825490821786545
- type: nauc_mrr_at_20_std
value: 23.747309823079746
- type: nauc_mrr_at_3_diff1
value: 58.634726160884576
- type: nauc_mrr_at_3_max
value: 46.68634348254961
- type: nauc_mrr_at_3_std
value: 22.9939558189414
- type: nauc_mrr_at_5_diff1
value: 57.43527378441584
- type: nauc_mrr_at_5_max
value: 46.82233838319152
- type: nauc_mrr_at_5_std
value: 23.407766325712398
- type: nauc_ndcg_at_1000_diff1
value: 55.303289773692676
- type: nauc_ndcg_at_1000_max
value: 46.703610191621145
- type: nauc_ndcg_at_1000_std
value: 23.57730795756405
- type: nauc_ndcg_at_100_diff1
value: 54.38572710219233
- type: nauc_ndcg_at_100_max
value: 46.37493158024567
- type: nauc_ndcg_at_100_std
value: 23.314588126884324
- type: nauc_ndcg_at_10_diff1
value: 55.21850729666301
- type: nauc_ndcg_at_10_max
value: 45.58511788479343
- type: nauc_ndcg_at_10_std
value: 22.8531636189787
- type: nauc_ndcg_at_1_diff1
value: 61.903420841877356
- type: nauc_ndcg_at_1_max
value: 46.95318894276891
- type: nauc_ndcg_at_1_std
value: 23.19343113872584
- type: nauc_ndcg_at_20_diff1
value: 54.96359325487391
- type: nauc_ndcg_at_20_max
value: 46.525071413272975
- type: nauc_ndcg_at_20_std
value: 23.416022310286206
- type: nauc_ndcg_at_3_diff1
value: 57.33303538179732
- type: nauc_ndcg_at_3_max
value: 45.60081314229553
- type: nauc_ndcg_at_3_std
value: 20.311802683707644
- type: nauc_ndcg_at_5_diff1
value: 55.09370926297347
- type: nauc_ndcg_at_5_max
value: 45.11375173156922
- type: nauc_ndcg_at_5_std
value: 20.676971796560167
- type: nauc_precision_at_1000_diff1
value: -8.792997673585157
- type: nauc_precision_at_1000_max
value: 26.985804617599456
- type: nauc_precision_at_1000_std
value: 38.32145829157333
- type: nauc_precision_at_100_diff1
value: 3.448830291824138
- type: nauc_precision_at_100_max
value: 33.3751058104728
- type: nauc_precision_at_100_std
value: 36.07155861781976
- type: nauc_precision_at_10_diff1
value: 27.905538531066256
- type: nauc_precision_at_10_max
value: 41.57287780821485
- type: nauc_precision_at_10_std
value: 36.11165069712307
- type: nauc_precision_at_1_diff1
value: 61.903420841877356
- type: nauc_precision_at_1_max
value: 46.95318894276891
- type: nauc_precision_at_1_std
value: 23.19343113872584
- type: nauc_precision_at_20_diff1
value: 21.945937631553438
- type: nauc_precision_at_20_max
value: 42.8503772546226
- type: nauc_precision_at_20_std
value: 37.54978789546971
- type: nauc_precision_at_3_diff1
value: 44.695453949094684
- type: nauc_precision_at_3_max
value: 46.25836394647075
- type: nauc_precision_at_3_std
value: 25.448947126738393
- type: nauc_precision_at_5_diff1
value: 34.21739846774853
- type: nauc_precision_at_5_max
value: 43.36271521542134
- type: nauc_precision_at_5_std
value: 28.863168300518954
- type: nauc_recall_at_1000_diff1
value: 50.866272434900374
- type: nauc_recall_at_1000_max
value: 77.90745928000882
- type: nauc_recall_at_1000_std
value: 82.21288515406151
- type: nauc_recall_at_100_diff1
value: 35.307317119527056
- type: nauc_recall_at_100_max
value: 46.922433638935956
- type: nauc_recall_at_100_std
value: 31.814942138236262
- type: nauc_recall_at_10_diff1
value: 47.8121533413515
- type: nauc_recall_at_10_max
value: 43.310991487523246
- type: nauc_recall_at_10_std
value: 25.903501909176917
- type: nauc_recall_at_1_diff1
value: 60.98838903764472
- type: nauc_recall_at_1_max
value: 43.22668392792625
- type: nauc_recall_at_1_std
value: 17.29004046426385
- type: nauc_recall_at_20_diff1
value: 45.83142943406739
- type: nauc_recall_at_20_max
value: 46.73030342771932
- type: nauc_recall_at_20_std
value: 28.07957120284036
- type: nauc_recall_at_3_diff1
value: 54.187633219194495
- type: nauc_recall_at_3_max
value: 43.672283626861066
- type: nauc_recall_at_3_std
value: 18.136469354114993
- type: nauc_recall_at_5_diff1
value: 47.4292849527445
- type: nauc_recall_at_5_max
value: 42.22276792180875
- type: nauc_recall_at_5_std
value: 19.22371392434811
- type: ndcg_at_1
value: 47.333
- type: ndcg_at_10
value: 57.827
- type: ndcg_at_100
value: 61.551
- type: ndcg_at_1000
value: 62.865
- type: ndcg_at_20
value: 59.03699999999999
- type: ndcg_at_3
value: 53.554
- type: ndcg_at_5
value: 55.949000000000005
- type: precision_at_1
value: 47.333
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 4.167
- type: precision_at_3
value: 21.111
- type: precision_at_5
value: 14.133000000000001
- type: recall_at_1
value: 45.083
- type: recall_at_10
value: 68.667
- type: recall_at_100
value: 86.433
- type: recall_at_1000
value: 97.0
- type: recall_at_20
value: 73.078
- type: recall_at_3
value: 57.477999999999994
- type: recall_at_5
value: 63.322
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID-PL
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
split: test
type: clarin-knext/trec-covid-pl
metrics:
- type: main_score
value: 56.919
- type: map_at_1
value: 0.17600000000000002
- type: map_at_10
value: 1.352
- type: map_at_100
value: 7.253
- type: map_at_1000
value: 18.698
- type: map_at_20
value: 2.313
- type: map_at_3
value: 0.496
- type: map_at_5
value: 0.775
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 80.26904761904761
- type: mrr_at_100
value: 80.26904761904761
- type: mrr_at_1000
value: 80.26904761904761
- type: mrr_at_20
value: 80.26904761904761
- type: mrr_at_3
value: 78.33333333333333
- type: mrr_at_5
value: 79.73333333333332
- type: nauc_map_at_1000_diff1
value: 6.574463369141221
- type: nauc_map_at_1000_max
value: 53.38255229751684
- type: nauc_map_at_1000_std
value: 80.05902957099651
- type: nauc_map_at_100_diff1
value: 11.446821053406707
- type: nauc_map_at_100_max
value: 44.68607496071329
- type: nauc_map_at_100_std
value: 72.78356846807002
- type: nauc_map_at_10_diff1
value: 19.670014556837902
- type: nauc_map_at_10_max
value: 34.81097303843686
- type: nauc_map_at_10_std
value: 33.674183618423335
- type: nauc_map_at_1_diff1
value: 21.506439684761883
- type: nauc_map_at_1_max
value: 28.484715735575577
- type: nauc_map_at_1_std
value: 9.63153171871658
- type: nauc_map_at_20_diff1
value: 21.0792619485704
- type: nauc_map_at_20_max
value: 42.16963284469341
- type: nauc_map_at_20_std
value: 40.700515917035524
- type: nauc_map_at_3_diff1
value: 26.981672835550295
- type: nauc_map_at_3_max
value: 32.974693063997506
- type: nauc_map_at_3_std
value: 16.6022898528941
- type: nauc_map_at_5_diff1
value: 27.87549872058613
- type: nauc_map_at_5_max
value: 33.80977925406638
- type: nauc_map_at_5_std
value: 19.902109058910966
- type: nauc_mrr_at_1000_diff1
value: 12.46327367923585
- type: nauc_mrr_at_1000_max
value: 36.671369778214725
- type: nauc_mrr_at_1000_std
value: 29.65039484236962
- type: nauc_mrr_at_100_diff1
value: 12.46327367923585
- type: nauc_mrr_at_100_max
value: 36.671369778214725
- type: nauc_mrr_at_100_std
value: 29.65039484236962
- type: nauc_mrr_at_10_diff1
value: 12.46327367923585
- type: nauc_mrr_at_10_max
value: 36.671369778214725
- type: nauc_mrr_at_10_std
value: 29.65039484236962
- type: nauc_mrr_at_1_diff1
value: 6.319535622970017
- type: nauc_mrr_at_1_max
value: 33.71225209038767
- type: nauc_mrr_at_1_std
value: 25.834427475640904
- type: nauc_mrr_at_20_diff1
value: 12.46327367923585
- type: nauc_mrr_at_20_max
value: 36.671369778214725
- type: nauc_mrr_at_20_std
value: 29.65039484236962
- type: nauc_mrr_at_3_diff1
value: 14.027551353113887
- type: nauc_mrr_at_3_max
value: 38.329801108575204
- type: nauc_mrr_at_3_std
value: 29.922562764916822
- type: nauc_mrr_at_5_diff1
value: 14.272859057946812
- type: nauc_mrr_at_5_max
value: 36.26521327614547
- type: nauc_mrr_at_5_std
value: 30.35143151694706
- type: nauc_ndcg_at_1000_diff1
value: 11.430252629811264
- type: nauc_ndcg_at_1000_max
value: 54.72660044236807
- type: nauc_ndcg_at_1000_std
value: 78.30081415388416
- type: nauc_ndcg_at_100_diff1
value: 0.3033147120555255
- type: nauc_ndcg_at_100_max
value: 44.79981966050289
- type: nauc_ndcg_at_100_std
value: 70.8722962407257
- type: nauc_ndcg_at_10_diff1
value: 13.708493191967316
- type: nauc_ndcg_at_10_max
value: 45.58714259949
- type: nauc_ndcg_at_10_std
value: 54.25312608750681
- type: nauc_ndcg_at_1_diff1
value: 14.13764957725658
- type: nauc_ndcg_at_1_max
value: 35.89238137772783
- type: nauc_ndcg_at_1_std
value: 26.159271864845252
- type: nauc_ndcg_at_20_diff1
value: 10.821994469339833
- type: nauc_ndcg_at_20_max
value: 49.655194522856874
- type: nauc_ndcg_at_20_std
value: 59.38126671218269
- type: nauc_ndcg_at_3_diff1
value: 21.715565312196077
- type: nauc_ndcg_at_3_max
value: 43.75654188258407
- type: nauc_ndcg_at_3_std
value: 43.06565426451109
- type: nauc_ndcg_at_5_diff1
value: 23.655719788636784
- type: nauc_ndcg_at_5_max
value: 43.918620576813254
- type: nauc_ndcg_at_5_std
value: 43.25044045865146
- type: nauc_precision_at_1000_diff1
value: -7.801822177721561
- type: nauc_precision_at_1000_max
value: 39.258818089435316
- type: nauc_precision_at_1000_std
value: 51.66205821260089
- type: nauc_precision_at_100_diff1
value: -4.119704756180739
- type: nauc_precision_at_100_max
value: 39.712338903322255
- type: nauc_precision_at_100_std
value: 72.21641244608408
- type: nauc_precision_at_10_diff1
value: 8.444233068337487
- type: nauc_precision_at_10_max
value: 42.4676899985165
- type: nauc_precision_at_10_std
value: 56.826333196617604
- type: nauc_precision_at_1_diff1
value: 6.319535622970017
- type: nauc_precision_at_1_max
value: 33.71225209038767
- type: nauc_precision_at_1_std
value: 25.834427475640904
- type: nauc_precision_at_20_diff1
value: 5.9351451055270665
- type: nauc_precision_at_20_max
value: 48.44119310018816
- type: nauc_precision_at_20_std
value: 59.5595391474413
- type: nauc_precision_at_3_diff1
value: 20.49183589553138
- type: nauc_precision_at_3_max
value: 43.97209215954164
- type: nauc_precision_at_3_std
value: 43.38846811953682
- type: nauc_precision_at_5_diff1
value: 23.91193541491969
- type: nauc_precision_at_5_max
value: 42.89037965109586
- type: nauc_precision_at_5_std
value: 43.85307223071737
- type: nauc_recall_at_1000_diff1
value: 14.852243091307962
- type: nauc_recall_at_1000_max
value: 52.716143146467246
- type: nauc_recall_at_1000_std
value: 75.96395414412834
- type: nauc_recall_at_100_diff1
value: 15.714854209882853
- type: nauc_recall_at_100_max
value: 36.02809107498271
- type: nauc_recall_at_100_std
value: 69.13542905710189
- type: nauc_recall_at_10_diff1
value: 21.595214483052263
- type: nauc_recall_at_10_max
value: 30.858824962274056
- type: nauc_recall_at_10_std
value: 32.41949976903557
- type: nauc_recall_at_1_diff1
value: 21.506439684761883
- type: nauc_recall_at_1_max
value: 28.484715735575577
- type: nauc_recall_at_1_std
value: 9.63153171871658
- type: nauc_recall_at_20_diff1
value: 26.088109678326145
- type: nauc_recall_at_20_max
value: 39.30741232084537
- type: nauc_recall_at_20_std
value: 35.63530214277264
- type: nauc_recall_at_3_diff1
value: 30.069120349407143
- type: nauc_recall_at_3_max
value: 30.61753190304264
- type: nauc_recall_at_3_std
value: 18.336355866759682
- type: nauc_recall_at_5_diff1
value: 31.512613211529615
- type: nauc_recall_at_5_max
value: 30.43538310477602
- type: nauc_recall_at_5_std
value: 19.67467281491149
- type: ndcg_at_1
value: 61.0
- type: ndcg_at_10
value: 56.919
- type: ndcg_at_100
value: 44.4
- type: ndcg_at_1000
value: 42.588
- type: ndcg_at_20
value: 54.266999999999996
- type: ndcg_at_3
value: 58.765
- type: ndcg_at_5
value: 58.553
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 62.0
- type: precision_at_100
value: 45.839999999999996
- type: precision_at_1000
value: 19.31
- type: precision_at_20
value: 58.199999999999996
- type: precision_at_3
value: 66.667
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 0.17600000000000002
- type: recall_at_10
value: 1.637
- type: recall_at_100
value: 10.764999999999999
- type: recall_at_1000
value: 40.766999999999996
- type: recall_at_20
value: 2.983
- type: recall_at_3
value: 0.5519999999999999
- type: recall_at_5
value: 0.8829999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CEDRClassification
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
split: test
type: ai-forever/cedr-classification
metrics:
- type: accuracy
value: 42.15727948990436
- type: f1
value: 39.09194730362947
- type: lrap
value: 71.07199787460253
- type: main_score
value: 42.15727948990436
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB GeoreviewClassification
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
split: test
type: ai-forever/georeview-classification
metrics:
- type: accuracy
value: 47.685546875
- type: f1
value: 42.201867616479085
- type: f1_weighted
value: 42.20127250813618
- type: main_score
value: 47.685546875
task:
type: Classification
- dataset:
config: default
name: MTEB GeoreviewClusteringP2P
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
split: test
type: ai-forever/georeview-clustering-p2p
metrics:
- type: main_score
value: 63.39849666467603
- type: v_measure
value: 63.39849666467603
- type: v_measure_std
value: 0.4433669974776044
task:
type: Clustering
- dataset:
config: default
name: MTEB HeadlineClassification
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
split: test
type: ai-forever/headline-classification
metrics:
- type: accuracy
value: 83.45703125
- type: f1
value: 83.44147121320216
- type: f1_weighted
value: 83.43953816781061
- type: main_score
value: 83.45703125
task:
type: Classification
- dataset:
config: default
name: MTEB InappropriatenessClassification
revision: 601651fdc45ef243751676e62dd7a19f491c0285
split: test
type: ai-forever/inappropriateness-classification
metrics:
- type: accuracy
value: 61.318359375
- type: ap
value: 57.103049962056815
- type: ap_weighted
value: 57.103049962056815
- type: f1
value: 60.69364450664112
- type: f1_weighted
value: 60.69364450664112
- type: main_score
value: 61.318359375
task:
type: Classification
- dataset:
config: default
name: MTEB KinopoiskClassification
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
split: test
type: ai-forever/kinopoisk-sentiment-classification
metrics:
- type: accuracy
value: 59.040000000000006
- type: f1
value: 55.63433742720159
- type: f1_weighted
value: 55.63433742720159
- type: main_score
value: 59.040000000000006
task:
type: Classification
- dataset:
config: ru
name: MTEB MIRACLReranking (ru)
revision: 6d1962c527217f8927fca80f890f14f36b2802af
split: dev
type: miracl/mmteb-miracl-reranking
metrics:
- type: MAP@1(MIRACL)
value: 29.729
- type: MAP@10(MIRACL)
value: 48.713
- type: MAP@100(MIRACL)
value: 50.792
- type: MAP@1000(MIRACL)
value: 50.792
- type: MAP@20(MIRACL)
value: 50.197
- type: MAP@3(MIRACL)
value: 41.8
- type: MAP@5(MIRACL)
value: 45.706
- type: NDCG@1(MIRACL)
value: 49.158
- type: NDCG@10(MIRACL)
value: 56.550999999999995
- type: NDCG@100(MIRACL)
value: 60.829
- type: NDCG@1000(MIRACL)
value: 60.829
- type: NDCG@20(MIRACL)
value: 59.229
- type: NDCG@3(MIRACL)
value: 50.397000000000006
- type: NDCG@5(MIRACL)
value: 53.105000000000004
- type: P@1(MIRACL)
value: 49.158
- type: P@10(MIRACL)
value: 14.908
- type: P@100(MIRACL)
value: 1.9529999999999998
- type: P@1000(MIRACL)
value: 0.19499999999999998
- type: P@20(MIRACL)
value: 8.753
- type: P@3(MIRACL)
value: 31.061
- type: P@5(MIRACL)
value: 23.785
- type: Recall@1(MIRACL)
value: 29.729
- type: Recall@10(MIRACL)
value: 67.223
- type: Recall@100(MIRACL)
value: 79.952
- type: Recall@1000(MIRACL)
value: 79.952
- type: Recall@20(MIRACL)
value: 74.417
- type: Recall@3(MIRACL)
value: 49.073
- type: Recall@5(MIRACL)
value: 58.094
- type: main_score
value: 56.550999999999995
- type: nAUC_MAP@1000_diff1(MIRACL)
value: 19.222716664871324
- type: nAUC_MAP@1000_max(MIRACL)
value: 28.91315309273525
- type: nAUC_MAP@1000_std(MIRACL)
value: 15.773770301363973
- type: nAUC_MAP@100_diff1(MIRACL)
value: 19.222716664871324
- type: nAUC_MAP@100_max(MIRACL)
value: 28.91315309273525
- type: nAUC_MAP@100_std(MIRACL)
value: 15.773770301363973
- type: nAUC_MAP@10_diff1(MIRACL)
value: 21.16716217839532
- type: nAUC_MAP@10_max(MIRACL)
value: 26.58073750952478
- type: nAUC_MAP@10_std(MIRACL)
value: 14.98546699381452
- type: nAUC_MAP@1_diff1(MIRACL)
value: 37.50928508734578
- type: nAUC_MAP@1_max(MIRACL)
value: 13.158704351998995
- type: nAUC_MAP@1_std(MIRACL)
value: 4.422878276220556
- type: nAUC_MAP@20_diff1(MIRACL)
value: 19.951045759045467
- type: nAUC_MAP@20_max(MIRACL)
value: 28.25165991244302
- type: nAUC_MAP@20_std(MIRACL)
value: 15.850363419877105
- type: nAUC_MAP@3_diff1(MIRACL)
value: 27.774164479669988
- type: nAUC_MAP@3_max(MIRACL)
value: 20.738889611307496
- type: nAUC_MAP@3_std(MIRACL)
value: 9.22491952318088
- type: nAUC_MAP@5_diff1(MIRACL)
value: 23.86089217267443
- type: nAUC_MAP@5_max(MIRACL)
value: 23.19878810494586
- type: nAUC_MAP@5_std(MIRACL)
value: 11.851875808858123
- type: nAUC_NDCG@1000_diff1(MIRACL)
value: 9.459016218726891
- type: nAUC_NDCG@1000_max(MIRACL)
value: 38.018030050210896
- type: nAUC_NDCG@1000_std(MIRACL)
value: 20.555997574199246
- type: nAUC_NDCG@100_diff1(MIRACL)
value: 9.459016218726891
- type: nAUC_NDCG@100_max(MIRACL)
value: 38.018030050210896
- type: nAUC_NDCG@100_std(MIRACL)
value: 20.555997574199246
- type: nAUC_NDCG@10_diff1(MIRACL)
value: 14.2494195957649
- type: nAUC_NDCG@10_max(MIRACL)
value: 32.87676976986289
- type: nAUC_NDCG@10_std(MIRACL)
value: 19.469852065776976
- type: nAUC_NDCG@1_diff1(MIRACL)
value: 23.312659021070818
- type: nAUC_NDCG@1_max(MIRACL)
value: 31.554119919664593
- type: nAUC_NDCG@1_std(MIRACL)
value: 17.533789813864466
- type: nAUC_NDCG@20_diff1(MIRACL)
value: 11.694064829915717
- type: nAUC_NDCG@20_max(MIRACL)
value: 36.12122229242797
- type: nAUC_NDCG@20_std(MIRACL)
value: 20.886325245384313
- type: nAUC_NDCG@3_diff1(MIRACL)
value: 19.70964037059834
- type: nAUC_NDCG@3_max(MIRACL)
value: 28.271224651385758
- type: nAUC_NDCG@3_std(MIRACL)
value: 14.182889320426757
- type: nAUC_NDCG@5_diff1(MIRACL)
value: 17.143482434537635
- type: nAUC_NDCG@5_max(MIRACL)
value: 28.911278684121744
- type: nAUC_NDCG@5_std(MIRACL)
value: 15.83019582479379
- type: nAUC_P@1000_diff1(MIRACL)
value: -28.806220159210838
- type: nAUC_P@1000_max(MIRACL)
value: 30.19137414854295
- type: nAUC_P@1000_std(MIRACL)
value: 15.577217138606922
- type: nAUC_P@100_diff1(MIRACL)
value: -28.8062201592108
- type: nAUC_P@100_max(MIRACL)
value: 30.191374148543016
- type: nAUC_P@100_std(MIRACL)
value: 15.577217138606963
- type: nAUC_P@10_diff1(MIRACL)
value: -23.950963396253567
- type: nAUC_P@10_max(MIRACL)
value: 32.31620562041691
- type: nAUC_P@10_std(MIRACL)
value: 22.76652888514141
- type: nAUC_P@1_diff1(MIRACL)
value: 23.312659021070818
- type: nAUC_P@1_max(MIRACL)
value: 31.554119919664593
- type: nAUC_P@1_std(MIRACL)
value: 17.533789813864466
- type: nAUC_P@20_diff1(MIRACL)
value: -26.522109242426172
- type: nAUC_P@20_max(MIRACL)
value: 31.490097667881027
- type: nAUC_P@20_std(MIRACL)
value: 20.51757471839622
- type: nAUC_P@3_diff1(MIRACL)
value: -8.494670555442749
- type: nAUC_P@3_max(MIRACL)
value: 33.197306356212295
- type: nAUC_P@3_std(MIRACL)
value: 18.96447162170764
- type: nAUC_P@5_diff1(MIRACL)
value: -19.15325386641154
- type: nAUC_P@5_max(MIRACL)
value: 31.846463690427683
- type: nAUC_P@5_std(MIRACL)
value: 20.914296846825028
- type: nAUC_Recall@1000_diff1(MIRACL)
value: -22.62644777038629
- type: nAUC_Recall@1000_max(MIRACL)
value: 63.09417027858301
- type: nAUC_Recall@1000_std(MIRACL)
value: 31.96936126619333
- type: nAUC_Recall@100_diff1(MIRACL)
value: -22.62644777038629
- type: nAUC_Recall@100_max(MIRACL)
value: 63.09417027858301
- type: nAUC_Recall@100_std(MIRACL)
value: 31.96936126619333
- type: nAUC_Recall@10_diff1(MIRACL)
value: 1.389536667314163
- type: nAUC_Recall@10_max(MIRACL)
value: 36.80168430587649
- type: nAUC_Recall@10_std(MIRACL)
value: 24.6096121100626
- type: nAUC_Recall@1_diff1(MIRACL)
value: 37.50928508734578
- type: nAUC_Recall@1_max(MIRACL)
value: 13.158704351998995
- type: nAUC_Recall@1_std(MIRACL)
value: 4.422878276220556
- type: nAUC_Recall@20_diff1(MIRACL)
value: -8.586661617880036
- type: nAUC_Recall@20_max(MIRACL)
value: 48.977640900606715
- type: nAUC_Recall@20_std(MIRACL)
value: 30.787733282193763
- type: nAUC_Recall@3_diff1(MIRACL)
value: 20.85452801657472
- type: nAUC_Recall@3_max(MIRACL)
value: 20.457796008702196
- type: nAUC_Recall@3_std(MIRACL)
value: 10.422494162066547
- type: nAUC_Recall@5_diff1(MIRACL)
value: 11.294860119295114
- type: nAUC_Recall@5_max(MIRACL)
value: 24.55554040640634
- type: nAUC_Recall@5_std(MIRACL)
value: 15.07523755007524
task:
type: Reranking
- dataset:
config: ru
name: MTEB MIRACLRetrieval (ru)
revision: main
split: dev
type: miracl/mmteb-miracl
metrics:
- type: main_score
value: 53.33
- type: map_at_1
value: 23.51
- type: map_at_10
value: 42.506
- type: map_at_100
value: 45.727000000000004
- type: map_at_1000
value: 45.824
- type: map_at_20
value: 44.482
- type: map_at_3
value: 34.903
- type: map_at_5
value: 38.924
- type: mrr_at_1
value: 47.52396166134185
- type: mrr_at_10
value: 60.53929585678796
- type: mrr_at_100
value: 61.08405013111772
- type: mrr_at_1000
value: 61.090960329457786
- type: mrr_at_20
value: 60.942355859942886
- type: mrr_at_3
value: 57.21512247071355
- type: mrr_at_5
value: 59.423588924387715
- type: nauc_map_at_1000_diff1
value: 27.9258851452338
- type: nauc_map_at_1000_max
value: 23.91526202439492
- type: nauc_map_at_1000_std
value: 1.9886186316328294
- type: nauc_map_at_100_diff1
value: 27.950443502043935
- type: nauc_map_at_100_max
value: 23.91952747895155
- type: nauc_map_at_100_std
value: 1.9828664117240875
- type: nauc_map_at_10_diff1
value: 28.591900542084257
- type: nauc_map_at_10_max
value: 22.26715273276218
- type: nauc_map_at_10_std
value: -0.2905582006620209
- type: nauc_map_at_1_diff1
value: 36.29159533442582
- type: nauc_map_at_1_max
value: 14.017798723971604
- type: nauc_map_at_1_std
value: -4.135744714942541
- type: nauc_map_at_20_diff1
value: 28.227642002703888
- type: nauc_map_at_20_max
value: 23.31288716904143
- type: nauc_map_at_20_std
value: 0.8608305708684871
- type: nauc_map_at_3_diff1
value: 31.25854158298961
- type: nauc_map_at_3_max
value: 19.94828898205679
- type: nauc_map_at_3_std
value: -3.055128116323982
- type: nauc_map_at_5_diff1
value: 29.569541485869138
- type: nauc_map_at_5_max
value: 20.295566102579233
- type: nauc_map_at_5_std
value: -2.0623859574064496
- type: nauc_mrr_at_1000_diff1
value: 27.361661005387717
- type: nauc_mrr_at_1000_max
value: 29.835566057491185
- type: nauc_mrr_at_1000_std
value: 9.18992468804867
- type: nauc_mrr_at_100_diff1
value: 27.364549933483367
- type: nauc_mrr_at_100_max
value: 29.841000191685662
- type: nauc_mrr_at_100_std
value: 9.201936238611633
- type: nauc_mrr_at_10_diff1
value: 27.091315668645876
- type: nauc_mrr_at_10_max
value: 30.083804137944814
- type: nauc_mrr_at_10_std
value: 9.295940302357145
- type: nauc_mrr_at_1_diff1
value: 30.096520602983773
- type: nauc_mrr_at_1_max
value: 25.92117667316542
- type: nauc_mrr_at_1_std
value: 6.628159094331555
- type: nauc_mrr_at_20_diff1
value: 27.26907735403706
- type: nauc_mrr_at_20_max
value: 29.91703823542895
- type: nauc_mrr_at_20_std
value: 9.220168448561815
- type: nauc_mrr_at_3_diff1
value: 27.132416524688672
- type: nauc_mrr_at_3_max
value: 29.879006809416147
- type: nauc_mrr_at_3_std
value: 8.495778638777473
- type: nauc_mrr_at_5_diff1
value: 27.164544736044938
- type: nauc_mrr_at_5_max
value: 29.756896839148844
- type: nauc_mrr_at_5_std
value: 8.697141135185072
- type: nauc_ndcg_at_1000_diff1
value: 25.711789502779325
- type: nauc_ndcg_at_1000_max
value: 28.742258668080943
- type: nauc_ndcg_at_1000_std
value: 8.197781962071534
- type: nauc_ndcg_at_100_diff1
value: 25.844850932804846
- type: nauc_ndcg_at_100_max
value: 29.043525248699453
- type: nauc_ndcg_at_100_std
value: 8.810501750069859
- type: nauc_ndcg_at_10_diff1
value: 26.47161747010468
- type: nauc_ndcg_at_10_max
value: 25.36709975989015
- type: nauc_ndcg_at_10_std
value: 3.070985924814878
- type: nauc_ndcg_at_1_diff1
value: 30.096520602983773
- type: nauc_ndcg_at_1_max
value: 25.92117667316542
- type: nauc_ndcg_at_1_std
value: 6.628159094331555
- type: nauc_ndcg_at_20_diff1
value: 26.329559310197325
- type: nauc_ndcg_at_20_max
value: 27.252374736353723
- type: nauc_ndcg_at_20_std
value: 5.279499913033636
- type: nauc_ndcg_at_3_diff1
value: 26.382469083855774
- type: nauc_ndcg_at_3_max
value: 25.667817557434446
- type: nauc_ndcg_at_3_std
value: 2.722781380568278
- type: nauc_ndcg_at_5_diff1
value: 26.63587958392066
- type: nauc_ndcg_at_5_max
value: 24.012746599673562
- type: nauc_ndcg_at_5_std
value: 1.875533584617588
- type: nauc_precision_at_1000_diff1
value: -16.886796017740146
- type: nauc_precision_at_1000_max
value: 13.452350695770388
- type: nauc_precision_at_1000_std
value: 20.253057030417295
- type: nauc_precision_at_100_diff1
value: -15.676681024836736
- type: nauc_precision_at_100_max
value: 17.21039273342314
- type: nauc_precision_at_100_std
value: 23.503219057796482
- type: nauc_precision_at_10_diff1
value: -7.353821346474632
- type: nauc_precision_at_10_max
value: 22.963099870525657
- type: nauc_precision_at_10_std
value: 16.75138999512155
- type: nauc_precision_at_1_diff1
value: 30.096520602983773
- type: nauc_precision_at_1_max
value: 25.92117667316542
- type: nauc_precision_at_1_std
value: 6.628159094331555
- type: nauc_precision_at_20_diff1
value: -11.020811644697545
- type: nauc_precision_at_20_max
value: 21.625978665259115
- type: nauc_precision_at_20_std
value: 20.005095685790348
- type: nauc_precision_at_3_diff1
value: 7.003507657338856
- type: nauc_precision_at_3_max
value: 27.73371213700131
- type: nauc_precision_at_3_std
value: 9.668915001732463
- type: nauc_precision_at_5_diff1
value: -1.715206180870653
- type: nauc_precision_at_5_max
value: 24.29609734679536
- type: nauc_precision_at_5_std
value: 13.402584423111977
- type: nauc_recall_at_1000_diff1
value: 17.28590002253731
- type: nauc_recall_at_1000_max
value: 68.10425916894825
- type: nauc_recall_at_1000_std
value: 73.8411367347451
- type: nauc_recall_at_100_diff1
value: 18.442237799863165
- type: nauc_recall_at_100_max
value: 39.59374558744695
- type: nauc_recall_at_100_std
value: 38.54186929047189
- type: nauc_recall_at_10_diff1
value: 19.243325372129107
- type: nauc_recall_at_10_max
value: 19.111906153501202
- type: nauc_recall_at_10_std
value: 0.8737992988209908
- type: nauc_recall_at_1_diff1
value: 36.29159533442582
- type: nauc_recall_at_1_max
value: 14.017798723971604
- type: nauc_recall_at_1_std
value: -4.135744714942541
- type: nauc_recall_at_20_diff1
value: 19.01527783708535
- type: nauc_recall_at_20_max
value: 22.731910630901435
- type: nauc_recall_at_20_std
value: 5.981218642323668
- type: nauc_recall_at_3_diff1
value: 25.892436310762985
- type: nauc_recall_at_3_max
value: 18.9097432217694
- type: nauc_recall_at_3_std
value: -3.8494373478485033
- type: nauc_recall_at_5_diff1
value: 22.032856212342626
- type: nauc_recall_at_5_max
value: 16.22066351445006
- type: nauc_recall_at_5_std
value: -3.416429358868604
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 53.33
- type: ndcg_at_100
value: 61.746
- type: ndcg_at_1000
value: 62.803
- type: ndcg_at_20
value: 57.498000000000005
- type: ndcg_at_3
value: 46.204
- type: ndcg_at_5
value: 48.824
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 16.478
- type: precision_at_100
value: 2.5860000000000003
- type: precision_at_1000
value: 0.27799999999999997
- type: precision_at_20
value: 10.12
- type: precision_at_3
value: 31.735999999999997
- type: precision_at_5
value: 24.951999999999998
- type: recall_at_1
value: 23.51
- type: recall_at_10
value: 64.98899999999999
- type: recall_at_100
value: 92.241
- type: recall_at_1000
value: 97.929
- type: recall_at_20
value: 76.822
- type: recall_at_3
value: 42.126000000000005
- type: recall_at_5
value: 52.449
task:
type: Retrieval
- dataset:
config: ru
name: MTEB MassiveIntentClassification (ru)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 56.18556634916303
- type: f1_weighted
value: 58.60322135027107
- type: main_score
value: 60.08069939475453
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveScenarioClassification (ru)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 66.11191203959372
- type: f1_weighted
value: 65.72977001101279
- type: main_score
value: 66.43913920645595
task:
type: Classification
- dataset:
config: default
name: MTEB RUParaPhraserSTS
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
split: test
type: merionum/ru_paraphraser
metrics:
- type: cosine_pearson
value: 61.89012659088028
- type: cosine_spearman
value: 68.53279563915628
- type: euclidean_pearson
value: 65.64255392938036
- type: euclidean_spearman
value: 68.53279561028907
- type: main_score
value: 68.53279563915628
- type: manhattan_pearson
value: 65.52758148688461
- type: manhattan_spearman
value: 68.32426605891132
- type: pearson
value: 61.89012659088028
- type: spearman
value: 68.53279563915628
task:
type: STS
- dataset:
config: default
name: MTEB RiaNewsRetrieval
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
split: test
type: ai-forever/ria-news-retrieval
metrics:
- type: main_score
value: 77.425
- type: map_at_1
value: 64.92
- type: map_at_10
value: 73.646
- type: map_at_100
value: 73.978
- type: map_at_1000
value: 73.988
- type: map_at_20
value: 73.872
- type: map_at_3
value: 72.128
- type: map_at_5
value: 73.083
- type: mrr_at_1
value: 64.92
- type: mrr_at_10
value: 73.64593650793611
- type: mrr_at_100
value: 73.97838585882688
- type: mrr_at_1000
value: 73.98842757843987
- type: mrr_at_20
value: 73.87221333104404
- type: mrr_at_3
value: 72.12833333333288
- type: mrr_at_5
value: 73.08333333333267
- type: nauc_map_at_1000_diff1
value: 70.38564962754138
- type: nauc_map_at_1000_max
value: 30.718444075784006
- type: nauc_map_at_1000_std
value: -10.69552302626205
- type: nauc_map_at_100_diff1
value: 70.37997156234715
- type: nauc_map_at_100_max
value: 30.725651745932925
- type: nauc_map_at_100_std
value: -10.685708218531655
- type: nauc_map_at_10_diff1
value: 70.3374861437528
- type: nauc_map_at_10_max
value: 30.749168340301246
- type: nauc_map_at_10_std
value: -10.799483498655107
- type: nauc_map_at_1_diff1
value: 73.9192388165348
- type: nauc_map_at_1_max
value: 28.442543674061532
- type: nauc_map_at_1_std
value: -11.831889393493318
- type: nauc_map_at_20_diff1
value: 70.34741729027523
- type: nauc_map_at_20_max
value: 30.734754088899564
- type: nauc_map_at_20_std
value: -10.686749277585324
- type: nauc_map_at_3_diff1
value: 70.21568887706891
- type: nauc_map_at_3_max
value: 30.467074420623437
- type: nauc_map_at_3_std
value: -11.472218305675923
- type: nauc_map_at_5_diff1
value: 70.34594531547204
- type: nauc_map_at_5_max
value: 30.754996331475464
- type: nauc_map_at_5_std
value: -11.084635295739732
- type: nauc_mrr_at_1000_diff1
value: 70.38565025595047
- type: nauc_mrr_at_1000_max
value: 30.718444183775805
- type: nauc_mrr_at_1000_std
value: -10.695523162874768
- type: nauc_mrr_at_100_diff1
value: 70.37997156234715
- type: nauc_mrr_at_100_max
value: 30.725651745932925
- type: nauc_mrr_at_100_std
value: -10.685708218531655
- type: nauc_mrr_at_10_diff1
value: 70.3374861437528
- type: nauc_mrr_at_10_max
value: 30.749168340301246
- type: nauc_mrr_at_10_std
value: -10.799483498655107
- type: nauc_mrr_at_1_diff1
value: 73.9192388165348
- type: nauc_mrr_at_1_max
value: 28.442543674061532
- type: nauc_mrr_at_1_std
value: -11.831889393493318
- type: nauc_mrr_at_20_diff1
value: 70.34741729027523
- type: nauc_mrr_at_20_max
value: 30.734754088899564
- type: nauc_mrr_at_20_std
value: -10.686749277585324
- type: nauc_mrr_at_3_diff1
value: 70.21568887706891
- type: nauc_mrr_at_3_max
value: 30.467074420623437
- type: nauc_mrr_at_3_std
value: -11.472218305675923
- type: nauc_mrr_at_5_diff1
value: 70.34594531547204
- type: nauc_mrr_at_5_max
value: 30.754996331475464
- type: nauc_mrr_at_5_std
value: -11.084635295739732
- type: nauc_ndcg_at_1000_diff1
value: 69.33016198036992
- type: nauc_ndcg_at_1000_max
value: 31.609803090952298
- type: nauc_ndcg_at_1000_std
value: -9.411221613110152
- type: nauc_ndcg_at_100_diff1
value: 69.13191582084188
- type: nauc_ndcg_at_100_max
value: 31.83693487089778
- type: nauc_ndcg_at_100_std
value: -9.0400895558464
- type: nauc_ndcg_at_10_diff1
value: 68.89462773551026
- type: nauc_ndcg_at_10_max
value: 31.87478936924236
- type: nauc_ndcg_at_10_std
value: -9.671029388622948
- type: nauc_ndcg_at_1_diff1
value: 73.9192388165348
- type: nauc_ndcg_at_1_max
value: 28.442543674061532
- type: nauc_ndcg_at_1_std
value: -11.831889393493318
- type: nauc_ndcg_at_20_diff1
value: 68.90205731804
- type: nauc_ndcg_at_20_max
value: 31.912656813093044
- type: nauc_ndcg_at_20_std
value: -9.090090804963808
- type: nauc_ndcg_at_3_diff1
value: 68.80670610482917
- type: nauc_ndcg_at_3_max
value: 31.18044464719784
- type: nauc_ndcg_at_3_std
value: -11.278491578164681
- type: nauc_ndcg_at_5_diff1
value: 68.97187216493903
- type: nauc_ndcg_at_5_max
value: 31.793607228058047
- type: nauc_ndcg_at_5_std
value: -10.481133374672472
- type: nauc_precision_at_1000_diff1
value: 43.78852990471418
- type: nauc_precision_at_1000_max
value: 56.047346474821055
- type: nauc_precision_at_1000_std
value: 35.73168397793686
- type: nauc_precision_at_100_diff1
value: 51.06009588636826
- type: nauc_precision_at_100_max
value: 50.40359839963674
- type: nauc_precision_at_100_std
value: 24.17139567398634
- type: nauc_precision_at_10_diff1
value: 60.308720843343444
- type: nauc_precision_at_10_max
value: 38.88883129762611
- type: nauc_precision_at_10_std
value: -1.9703986668774758
- type: nauc_precision_at_1_diff1
value: 73.9192388165348
- type: nauc_precision_at_1_max
value: 28.442543674061532
- type: nauc_precision_at_1_std
value: -11.831889393493318
- type: nauc_precision_at_20_diff1
value: 57.12901999287673
- type: nauc_precision_at_20_max
value: 42.275260619711744
- type: nauc_precision_at_20_std
value: 6.8998045953777165
- type: nauc_precision_at_3_diff1
value: 63.444192537561285
- type: nauc_precision_at_3_max
value: 33.87173673943739
- type: nauc_precision_at_3_std
value: -10.51740059765903
- type: nauc_precision_at_5_diff1
value: 62.70100972326122
- type: nauc_precision_at_5_max
value: 36.67473042882081
- type: nauc_precision_at_5_std
value: -7.4730688523228785
- type: nauc_recall_at_1000_diff1
value: 43.788529904715695
- type: nauc_recall_at_1000_max
value: 56.04734647482148
- type: nauc_recall_at_1000_std
value: 35.731683977938125
- type: nauc_recall_at_100_diff1
value: 51.06009588636825
- type: nauc_recall_at_100_max
value: 50.40359839963603
- type: nauc_recall_at_100_std
value: 24.171395673986428
- type: nauc_recall_at_10_diff1
value: 60.30872084334343
- type: nauc_recall_at_10_max
value: 38.88883129762609
- type: nauc_recall_at_10_std
value: -1.9703986668774112
- type: nauc_recall_at_1_diff1
value: 73.9192388165348
- type: nauc_recall_at_1_max
value: 28.442543674061532
- type: nauc_recall_at_1_std
value: -11.831889393493318
- type: nauc_recall_at_20_diff1
value: 57.12901999287683
- type: nauc_recall_at_20_max
value: 42.27526061971189
- type: nauc_recall_at_20_std
value: 6.899804595377761
- type: nauc_recall_at_3_diff1
value: 63.444192537561136
- type: nauc_recall_at_3_max
value: 33.87173673943714
- type: nauc_recall_at_3_std
value: -10.517400597659156
- type: nauc_recall_at_5_diff1
value: 62.70100972326114
- type: nauc_recall_at_5_max
value: 36.6747304288208
- type: nauc_recall_at_5_std
value: -7.473068852322717
- type: ndcg_at_1
value: 64.92
- type: ndcg_at_10
value: 77.425
- type: ndcg_at_100
value: 78.97
- type: ndcg_at_1000
value: 79.252
- type: ndcg_at_20
value: 78.23400000000001
- type: ndcg_at_3
value: 74.36399999999999
- type: ndcg_at_5
value: 76.081
- type: precision_at_1
value: 64.92
- type: precision_at_10
value: 8.907
- type: precision_at_100
value: 0.9610000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.612
- type: precision_at_3
value: 26.933
- type: precision_at_5
value: 16.991999999999997
- type: recall_at_1
value: 64.92
- type: recall_at_10
value: 89.07000000000001
- type: recall_at_100
value: 96.14
- type: recall_at_1000
value: 98.39
- type: recall_at_20
value: 92.24
- type: recall_at_3
value: 80.80000000000001
- type: recall_at_5
value: 84.96000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuBQReranking
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
split: test
type: ai-forever/rubq-reranking
metrics:
- type: main_score
value: 69.76660332457352
- type: map
value: 69.76660332457352
- type: mrr
value: 74.91840901415368
- type: nAUC_map_diff1
value: 40.77717577386574
- type: nAUC_map_max
value: 16.449821304849507
- type: nAUC_map_std
value: 5.464849678667512
- type: nAUC_mrr_diff1
value: 44.622323940651256
- type: nAUC_mrr_max
value: 20.915686008960645
- type: nAUC_mrr_std
value: 7.742740250688379
task:
type: Reranking
- dataset:
config: default
name: MTEB RuBQRetrieval
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
split: test
type: ai-forever/rubq-retrieval
metrics:
- type: main_score
value: 67.753
- type: map_at_1
value: 38.111
- type: map_at_10
value: 59.25
- type: map_at_100
value: 60.291
- type: map_at_1000
value: 60.31999999999999
- type: map_at_20
value: 60.007
- type: map_at_3
value: 53.39699999999999
- type: map_at_5
value: 57.021
- type: mrr_at_1
value: 54.60992907801418
- type: mrr_at_10
value: 67.53055930804169
- type: mrr_at_100
value: 67.88621490413858
- type: mrr_at_1000
value: 67.89435419716948
- type: mrr_at_20
value: 67.80457820326059
- type: mrr_at_3
value: 64.98226950354619
- type: mrr_at_5
value: 66.6991725768323
- type: nauc_map_at_1000_diff1
value: 38.61460560253499
- type: nauc_map_at_1000_max
value: 24.238741006152296
- type: nauc_map_at_1000_std
value: -12.553887111841771
- type: nauc_map_at_100_diff1
value: 38.604995328219836
- type: nauc_map_at_100_max
value: 24.25372744693149
- type: nauc_map_at_100_std
value: -12.525907529455832
- type: nauc_map_at_10_diff1
value: 38.2802363146203
- type: nauc_map_at_10_max
value: 24.148397487087742
- type: nauc_map_at_10_std
value: -13.02462313254209
- type: nauc_map_at_1_diff1
value: 42.20333973944006
- type: nauc_map_at_1_max
value: 16.04455015933995
- type: nauc_map_at_1_std
value: -11.426950122484298
- type: nauc_map_at_20_diff1
value: 38.49874303734095
- type: nauc_map_at_20_max
value: 24.27079948779279
- type: nauc_map_at_20_std
value: -12.643735833974782
- type: nauc_map_at_3_diff1
value: 38.393442128336126
- type: nauc_map_at_3_max
value: 21.120395203124264
- type: nauc_map_at_3_std
value: -14.57118408415527
- type: nauc_map_at_5_diff1
value: 37.98874776320297
- type: nauc_map_at_5_max
value: 22.75390581241078
- type: nauc_map_at_5_std
value: -13.871096120655116
- type: nauc_mrr_at_1000_diff1
value: 45.08121396075722
- type: nauc_mrr_at_1000_max
value: 27.331313499687486
- type: nauc_mrr_at_1000_std
value: -13.114787616167014
- type: nauc_mrr_at_100_diff1
value: 45.082808269851654
- type: nauc_mrr_at_100_max
value: 27.343021375586257
- type: nauc_mrr_at_100_std
value: -13.104901642101272
- type: nauc_mrr_at_10_diff1
value: 44.89445664817906
- type: nauc_mrr_at_10_max
value: 27.483504407572795
- type: nauc_mrr_at_10_std
value: -13.116664114214782
- type: nauc_mrr_at_1_diff1
value: 47.43773937564259
- type: nauc_mrr_at_1_max
value: 24.3996512246477
- type: nauc_mrr_at_1_std
value: -13.283010969155859
- type: nauc_mrr_at_20_diff1
value: 45.08382953390109
- type: nauc_mrr_at_20_max
value: 27.418666231602508
- type: nauc_mrr_at_20_std
value: -13.101239027782416
- type: nauc_mrr_at_3_diff1
value: 44.695558812456625
- type: nauc_mrr_at_3_max
value: 26.75153207261083
- type: nauc_mrr_at_3_std
value: -14.019251949468694
- type: nauc_mrr_at_5_diff1
value: 44.84929587390349
- type: nauc_mrr_at_5_max
value: 27.508337265101257
- type: nauc_mrr_at_5_std
value: -13.748841022127815
- type: nauc_ndcg_at_1000_diff1
value: 39.706451835474724
- type: nauc_ndcg_at_1000_max
value: 26.633343785995507
- type: nauc_ndcg_at_1000_std
value: -11.207900377782707
- type: nauc_ndcg_at_100_diff1
value: 39.49574863029789
- type: nauc_ndcg_at_100_max
value: 27.03615356082193
- type: nauc_ndcg_at_100_std
value: -10.456416625790485
- type: nauc_ndcg_at_10_diff1
value: 38.36118560524438
- type: nauc_ndcg_at_10_max
value: 27.29115954765498
- type: nauc_ndcg_at_10_std
value: -12.026533782516182
- type: nauc_ndcg_at_1_diff1
value: 47.43773937564259
- type: nauc_ndcg_at_1_max
value: 24.3996512246477
- type: nauc_ndcg_at_1_std
value: -13.283010969155859
- type: nauc_ndcg_at_20_diff1
value: 39.11328986667616
- type: nauc_ndcg_at_20_max
value: 27.48803343585931
- type: nauc_ndcg_at_20_std
value: -11.061481936299867
- type: nauc_ndcg_at_3_diff1
value: 38.09080511583124
- type: nauc_ndcg_at_3_max
value: 22.960624575385577
- type: nauc_ndcg_at_3_std
value: -15.162532187246452
- type: nauc_ndcg_at_5_diff1
value: 37.84051905054443
- type: nauc_ndcg_at_5_max
value: 24.859831442018766
- type: nauc_ndcg_at_5_std
value: -14.208813731290032
- type: nauc_precision_at_1000_diff1
value: -8.235293550747457
- type: nauc_precision_at_1000_max
value: 7.564714965839937
- type: nauc_precision_at_1000_std
value: 5.160867910754626
- type: nauc_precision_at_100_diff1
value: -6.654255562369982
- type: nauc_precision_at_100_max
value: 10.671679751630798
- type: nauc_precision_at_100_std
value: 7.057997024307852
- type: nauc_precision_at_10_diff1
value: 0.4759476932076396
- type: nauc_precision_at_10_max
value: 18.705407595194696
- type: nauc_precision_at_10_std
value: 1.1284269201001864
- type: nauc_precision_at_1_diff1
value: 47.43773937564259
- type: nauc_precision_at_1_max
value: 24.3996512246477
- type: nauc_precision_at_1_std
value: -13.283010969155859
- type: nauc_precision_at_20_diff1
value: -3.1830019504133027
- type: nauc_precision_at_20_max
value: 15.311012950383418
- type: nauc_precision_at_20_std
value: 4.411311445012971
- type: nauc_precision_at_3_diff1
value: 14.900799832530298
- type: nauc_precision_at_3_max
value: 21.59448854239842
- type: nauc_precision_at_3_std
value: -10.383301518031464
- type: nauc_precision_at_5_diff1
value: 6.129583634729085
- type: nauc_precision_at_5_max
value: 19.764705099171525
- type: nauc_precision_at_5_std
value: -4.931119926816597
- type: nauc_recall_at_1000_diff1
value: 7.393009712112532
- type: nauc_recall_at_1000_max
value: 49.79443106358621
- type: nauc_recall_at_1000_std
value: 74.80255240755591
- type: nauc_recall_at_100_diff1
value: 19.35257139711146
- type: nauc_recall_at_100_max
value: 42.80851742013903
- type: nauc_recall_at_100_std
value: 37.546560048377444
- type: nauc_recall_at_10_diff1
value: 24.621169385136398
- type: nauc_recall_at_10_max
value: 33.22268204638332
- type: nauc_recall_at_10_std
value: -4.7401788730268235
- type: nauc_recall_at_1_diff1
value: 42.20333973944006
- type: nauc_recall_at_1_max
value: 16.04455015933995
- type: nauc_recall_at_1_std
value: -11.426950122484298
- type: nauc_recall_at_20_diff1
value: 24.927652532242657
- type: nauc_recall_at_20_max
value: 38.260344944664766
- type: nauc_recall_at_20_std
value: 5.423281114042867
- type: nauc_recall_at_3_diff1
value: 30.44227595912427
- type: nauc_recall_at_3_max
value: 19.94976153694003
- type: nauc_recall_at_3_std
value: -15.928733556196534
- type: nauc_recall_at_5_diff1
value: 27.044814357935724
- type: nauc_recall_at_5_max
value: 23.824668491154366
- type: nauc_recall_at_5_std
value: -13.992845356113314
- type: ndcg_at_1
value: 54.61
- type: ndcg_at_10
value: 67.753
- type: ndcg_at_100
value: 70.926
- type: ndcg_at_1000
value: 71.41
- type: ndcg_at_20
value: 69.61500000000001
- type: ndcg_at_3
value: 59.678
- type: ndcg_at_5
value: 64.012
- type: precision_at_1
value: 54.61
- type: precision_at_10
value: 13.747000000000002
- type: precision_at_100
value: 1.601
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 7.446999999999999
- type: precision_at_3
value: 33.255
- type: precision_at_5
value: 23.747
- type: recall_at_1
value: 38.111
- type: recall_at_10
value: 83.878
- type: recall_at_100
value: 95.84899999999999
- type: recall_at_1000
value: 99.05199999999999
- type: recall_at_20
value: 90.048
- type: recall_at_3
value: 64.126
- type: recall_at_5
value: 74.295
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuReviewsClassification
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
split: test
type: ai-forever/ru-reviews-classification
metrics:
- type: accuracy
value: 66.0888671875
- type: f1
value: 63.79342584872498
- type: f1_weighted
value: 63.79112620928187
- type: main_score
value: 66.0888671875
task:
type: Classification
- dataset:
config: default
name: MTEB RuSTSBenchmarkSTS
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
split: test
type: ai-forever/ru-stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 78.40381860532754
- type: cosine_spearman
value: 78.44128247246344
- type: euclidean_pearson
value: 77.03436669125563
- type: euclidean_spearman
value: 78.44009017152538
- type: main_score
value: 78.44128247246344
- type: manhattan_pearson
value: 77.084766201637
- type: manhattan_spearman
value: 78.46899044600028
- type: pearson
value: 78.40381860532754
- type: spearman
value: 78.44128247246344
task:
type: STS
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClassification
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: accuracy
value: 61.4111328125
- type: f1
value: 59.604229603854044
- type: f1_weighted
value: 59.61906710038802
- type: main_score
value: 61.4111328125
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClusteringP2P
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: main_score
value: 55.660781672610625
- type: v_measure
value: 55.660781672610625
- type: v_measure_std
value: 1.0880487214373578
task:
type: Clustering
- dataset:
config: default
name: MTEB RuSciBenchOECDClassification
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: accuracy
value: 48.6669921875
- type: f1
value: 46.24529719568694
- type: f1_weighted
value: 46.24736172369365
- type: main_score
value: 48.6669921875
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchOECDClusteringP2P
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: main_score
value: 47.95513383500326
- type: v_measure
value: 47.95513383500326
- type: v_measure_std
value: 0.9391146092620886
task:
type: Clustering
- dataset:
config: ru
name: MTEB STS22 (ru)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 65.27471390704719
- type: cosine_spearman
value: 68.12010913287949
- type: euclidean_pearson
value: 65.60124415285192
- type: euclidean_spearman
value: 68.12010913287949
- type: main_score
value: 68.12010913287949
- type: manhattan_pearson
value: 65.21850751060232
- type: manhattan_spearman
value: 67.85162022914248
- type: pearson
value: 65.27471390704719
- type: spearman
value: 68.12010913287949
task:
type: STS
- dataset:
config: default
name: MTEB SensitiveTopicsClassification
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
split: test
type: ai-forever/sensitive-topics-classification
metrics:
- type: accuracy
value: 30.0537109375
- type: f1
value: 35.12028781898003
- type: lrap
value: 45.91071234808953
- type: main_score
value: 30.0537109375
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB TERRa
revision: 7b58f24536063837d644aab9a023c62199b2a612
split: dev
type: ai-forever/terra-pairclassification
metrics:
- type: cosine_accuracy
value: 60.91205211726385
- type: cosine_accuracy_threshold
value: 68.15387606620789
- type: cosine_ap
value: 57.705995373862805
- type: cosine_f1
value: 67.57990867579909
- type: cosine_f1_threshold
value: 54.87680435180664
- type: cosine_precision
value: 51.92982456140351
- type: cosine_recall
value: 96.73202614379085
- type: dot_accuracy
value: 60.91205211726385
- type: dot_accuracy_threshold
value: 68.15387010574341
- type: dot_ap
value: 57.705995373862805
- type: dot_f1
value: 67.57990867579909
- type: dot_f1_threshold
value: 54.87680435180664
- type: dot_precision
value: 51.92982456140351
- type: dot_recall
value: 96.73202614379085
- type: euclidean_accuracy
value: 60.91205211726385
- type: euclidean_accuracy_threshold
value: 79.80742454528809
- type: euclidean_ap
value: 57.705995373862805
- type: euclidean_f1
value: 67.57990867579909
- type: euclidean_f1_threshold
value: 94.99809741973877
- type: euclidean_precision
value: 51.92982456140351
- type: euclidean_recall
value: 96.73202614379085
- type: main_score
value: 57.705995373862805
- type: manhattan_accuracy
value: 60.586319218241044
- type: manhattan_accuracy_threshold
value: 1858.333969116211
- type: manhattan_ap
value: 57.53277048517774
- type: manhattan_f1
value: 67.59259259259261
- type: manhattan_f1_threshold
value: 2154.4769287109375
- type: manhattan_precision
value: 52.32974910394266
- type: manhattan_recall
value: 95.42483660130719
- type: max_ap
value: 57.705995373862805
- type: max_f1
value: 67.59259259259261
- type: max_precision
value: 52.32974910394266
- type: max_recall
value: 96.73202614379085
- type: similarity_accuracy
value: 60.91205211726385
- type: similarity_accuracy_threshold
value: 68.15387606620789
- type: similarity_ap
value: 57.705995373862805
- type: similarity_f1
value: 67.57990867579909
- type: similarity_f1_threshold
value: 54.87680435180664
- type: similarity_precision
value: 51.92982456140351
- type: similarity_recall
value: 96.73202614379085
task:
type: PairClassification
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
<h1 align="center">KaLM-Embedding</h1>
**KaLM-Embedding** is a series of embedding models adapted from auto-regressive LLMs with superior training data.
KaLM-embedding-multilingual-mini is trained from [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) with massive weakly-supervised pre-training and supervised fine-tuning data.
## 📑 Open-source Plan
- [x] Model Checkpoint
- [x] [KaLM-embedding-multilingual-mini-v1](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-v1)
- [x] [KaLM-embedding-multilingual-mini-instruct-v1](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1)
- [x] [KaLM-embedding-multilingual-mini-instruct-v1.5](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5)
- [ ] KaLM-embedding-multilingual-max-v1
- [x] Training and Evaluation Code: [HITsz-TMG/KaLM-Embedding](https://github.com/HITsz-TMG/KaLM-Embedding)
- [x] Technical Report: [KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model](https://arxiv.org/abs/2501.01028)
- [ ] Training Data
## Evaluation
| Model Name | Model Size | C-MTEB(35) | MTEB(56) | avg |
|:----:|:---:|:---:|:---:|:---:|
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 560M | 58.81 | 61.5 | 60.16 |
| [bge-m3 (dense)](https://huggingface.co/BAAI/bge-m3) | 560M | 60.80 | 59.84 | 60.32 |
| [gte-multilingual-base (dense)](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) | **305M** | 62.72 | 61.40 | 62.06 |
| [KaLM-embedding-multilingual-mini-v1](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-v1) | 494M | 62.31 | 61.87 | 62.09 |
| [KaLM-embedding-multilingual-mini-instruct-v1](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1) | 494M | 63.57 | 64.74 | 64.16 |
| [KaLM-embedding-multilingual-mini-instruct-v1.5](https://huggingface.co/HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5) | 494M | **64.13** | **64.94** | **64.53** |
## Requirements
Since this model is built on Qwen2, we advise installing `transformers>=4.37.0`; otherwise you may encounter the following error:
```
KeyError: 'qwen2'
```
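For reference, a one-line environment pin matching the note above (the exact command syntax may vary by environment):
```
pip install -U "transformers>=4.37.0"
```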
## Usage
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME_OR_PATH}') # Do NOT set trust_remote_code
model.max_seq_length = 512
embeddings = model.encode(
sentences,
normalize_embeddings=True,
batch_size=256,
show_progress_bar=True
)
print(embeddings)
```
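Because `normalize_embeddings=True` L2-normalizes the outputs, cosine similarity reduces to a plain dot product. A minimal sketch continuing from the snippet above (the variable names are ours):
```python
import numpy as np

# embeddings are L2-normalized, so the dot product equals cosine similarity
similarity = np.asarray(embeddings) @ np.asarray(embeddings).T
print(similarity.shape)  # (2, 2); the diagonal entries are ~1.0
```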
<!-- We add instruction for asymmetric tasks: retrieval, reranking, classification and clustering. -->
We add instructions for classification and clustering tasks.
If you want to add an instruction to the query (with no instruction for the corpus), you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME_OR_PATH}') # Do NOT set trust_remote_code
model.max_seq_length = 512
prompt = "Instruct: Classifying the category of french news. \n Query: "
embeddings = model.encode(
sentences,
prompt=prompt,
normalize_embeddings=True,
batch_size=256,
show_progress_bar=True
)
print(embeddings)
```
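The same query-side-prompt pattern extends to other asymmetric setups. A hedged sketch in which the prompt wording and the sentences are purely illustrative (they are not the model's official instructions):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('{MODEL_NAME_OR_PATH}')  # Do NOT set trust_remote_code
model.max_seq_length = 512

queries = ["how do I stop a persistent cough"]
corpus = [
    "Honey and warm fluids can soothe a cough.",
    "The stock market fell sharply today.",
]

# Instruction on the query side only; the corpus is encoded as-is.
q_emb = model.encode(
    queries,
    prompt="Instruct: Given a query, retrieve relevant passages. \n Query: ",
    normalize_embeddings=True,
)
d_emb = model.encode(corpus, normalize_embeddings=True)

print(q_emb @ d_emb.T)  # cosine similarities, since both sides are normalized
```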
## Contact
If you encounter any issues, feel free to contact us via email: [email protected]
|
KoichiYasuoka/bert-base-vietnamese-upos | KoichiYasuoka | 2025-01-03T07:18:17Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"vietnamese",
"pos",
"dependency-parsing",
"vi",
"dataset:universal_dependencies",
"base_model:FPTAI/vibert-base-cased",
"base_model:finetune:FPTAI/vibert-base-cased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-06T08:47:03Z | ---
language:
- "vi"
tags:
- "vietnamese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: FPTAI/vibert-base-cased
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "Hai cái đầu thì tốt hơn một."
---
# bert-base-vietnamese-upos
## Model Description
This is a BERT model pre-trained on Vietnamese texts for POS-tagging and dependency-parsing, derived from [vibert-base-cased](https://huggingface.co/FPTAI/vibert-base-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-vietnamese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-vietnamese-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("Hai cái đầu thì tốt hơn một."))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-vietnamese-upos")
print(nlp("Hai cái đầu thì tốt hơn một."))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF | mradermacher | 2025-01-03T07:12:10Z | 102 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T06:46:26Z | ---
base_model: zelk12/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
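As a minimal sketch (not covered by this card), one common way to run a single-file GGUF quant is via llama-cpp-python; the filename below is illustrative and the exact API depends on your installed version:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q4_K_M.gguf",  # path to a downloaded quant
    n_ctx=4096,
)
out = llm("Write a haiku about mountains.", max_tokens=128)
print(out["choices"][0]["text"])
```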
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B-GGUF/resolve/main/MT-Max-Merge_02012025163610-MUGBI-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
QuantFactory/HuatuoGPT-o1-7B-GGUF | QuantFactory | 2025-01-03T07:08:44Z | 656 | 4 | null | [
"gguf",
"medical",
"text-generation",
"en",
"zh",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:FreedomIntelligence/medical-o1-verifiable-problem",
"arxiv:2412.18925",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-01-03T06:29:50Z |
---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- FreedomIntelligence/medical-o1-verifiable-problem
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- medical
---
[](https://hf.co/QuantFactory)
# QuantFactory/HuatuoGPT-o1-7B-GGUF
This is a quantized version of [FreedomIntelligence/HuatuoGPT-o1-7B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B), created using llama.cpp.
# Original Model Card
<div align="center">
<h1>
HuatuoGPT-o1-7B
</h1>
</div>
<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-o1" target="_blank">GitHub</a> | <a href="https://arxiv.org/pdf/2412.18925" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**HuatuoGPT-o1** is a medical LLM designed for advanced medical reasoning. It generates a complex thought process, reflecting and refining its reasoning, before providing a final response.
For more information, visit our GitHub repository:
[https://github.com/FreedomIntelligence/HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1).
# <span>Model Info</span>
| | Backbone | Supported Languages | Link |
| -------------------- | ------------ | ----- | --------------------------------------------------------------------- |
| **HuatuoGPT-o1-8B** | LLaMA-3.1-8B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) |
| **HuatuoGPT-o1-70B** | LLaMA-3.1-70B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-70B) |
| **HuatuoGPT-o1-7B** | Qwen2.5-7B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B) |
| **HuatuoGPT-o1-72B** | Qwen2.5-72B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) |
# <span>Usage</span>
You can use HuatuoGPT-o1-7B in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B")

input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
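If you serve it with vLLM instead, a minimal sketch is shown below; the sampling parameters are illustrative and the `chat` helper assumes a recent vLLM version:
```python
from vllm import LLM, SamplingParams

llm = LLM(model="FreedomIntelligence/HuatuoGPT-o1-7B")
params = SamplingParams(temperature=0.7, max_tokens=2048)
outputs = llm.chat([{"role": "user", "content": "How to stop a cough?"}], params)
print(outputs[0].outputs[0].text)
```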
HuatuoGPT-o1 adopts a *thinks-before-it-answers* approach, with outputs formatted as:
```
## Thinking
[Reasoning process]
## Final Response
[Output]
```
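Given that fixed layout, the two parts can be separated with plain string handling; a sketch that assumes exactly the markers shown above:
```python
def split_reasoning(text: str):
    """Split a completion into (thinking, final response), assuming the markers above."""
    marker = "## Final Response"
    if marker in text:
        thinking, response = text.split(marker, 1)
        return thinking.replace("## Thinking", "", 1).strip(), response.strip()
    return "", text.strip()  # fallback: treat the whole text as the response

thinking, response = split_reasoning(tokenizer.decode(outputs[0], skip_special_tokens=True))
print(response)
```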
# <span>📖 Citation</span>
```
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
year={2024},
eprint={2412.18925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.18925},
}
```
|
Jopqior/ilql-model | Jopqior | 2025-01-03T07:08:42Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T07:08:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/roberta-classical-chinese-base-upos | KoichiYasuoka | 2025-01-03T07:07:47Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-classical-chinese-base-char",
"base_model:finetune:KoichiYasuoka/roberta-classical-chinese-base-char",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-classical-chinese-base-char
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
---
# roberta-classical-chinese-base-upos
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-base-upos")
```
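Continuing from the first snippet, a minimal tagging sketch with the plain transformers pipeline (mirroring the author's Vietnamese UPOS card above; the input is the widget sentence):
```py
from transformers import TokenClassificationPipeline

pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model, aggregation_strategy="simple")
text = "子曰學而時習之不亦説乎"
print([(text[t["start"]:t["end"]], t["entity_group"]) for t in pipeline(text)])
```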
## Reference
Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
dgambettavuw/M_gen0_run2_llama2-7b_xlsum_doc1000_real64_synt64_vuw | dgambettavuw | 2025-01-03T06:48:35Z | 168 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-01-03T06:45:44Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aztro/mabamasdxl | aztro | 2025-01-03T06:42:20Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/epicrealism-xl-v8kiss-sdxl",
"base_model:adapter:John6666/epicrealism-xl-v8kiss-sdxl",
"license:mit",
"region:us"
]
| text-to-image | 2025-01-03T06:39:52Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
photo of mabama, from back, from back, sleek long black hair, small waist,
thick thighs, wearing a light blue stretch denim mini cargo skirt and a top
tank, unique skin patterns, natural imperfections, dramatic lighting, soft
shadows, cinematic atmosphere, hyper-detailed, high-quality photography,
immersive and artistic composition, blurred background, low key (dark and
moody) visual style
parameters:
negative_prompt: >-
(hands, blurry, low quality, bad anatomy, text, watermark, poorly rendered
details)
output:
url: images/Captura de pantalla 2024-12-19 005310.png
base_model:
- John6666/epicrealism-xl-v8kiss-sdxl
instance_prompt: mabama
license: mit
pipeline_tag: text-to-image
---
# mabamasdxl
<Gallery />
## Model description
mabam

## Trigger words
You should use `mabama` to trigger the image generation.
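A minimal usage sketch with 🤗 Diffusers (assuming the LoRA weights in this repo load with `load_lora_weights`; the exact weight filename may need to be passed via `weight_name`):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# load the SDXL base model this LoRA was trained against
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/epicrealism-xl-v8kiss-sdxl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("aztro/mabamasdxl")  # attach the LoRA from this repo

# include the trigger word "mabama" in the prompt
image = pipe("photo of mabama, sleek long black hair, dramatic lighting").images[0]
image.save("mabama.png")
```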
## Download model
Weights for this model are available in Safetensors format.
[Download](/aztro/mabamasdxl/tree/main) them in the Files & versions tab. |
dimasik1987/453d5dde-5b2e-41b2-8719-b229c561d9de | dimasik1987 | 2025-01-03T06:38:51Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:Eurdem/Defne_llama3_2x8B",
"base_model:adapter:Eurdem/Defne_llama3_2x8B",
"license:llama3",
"region:us"
]
| null | 2025-01-03T04:31:37Z | ---
library_name: peft
license: llama3
base_model: Eurdem/Defne_llama3_2x8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 453d5dde-5b2e-41b2-8719-b229c561d9de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Eurdem/Defne_llama3_2x8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e76edf5b89c88ac9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e76edf5b89c88ac9_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik1987/453d5dde-5b2e-41b2-8719-b229c561d9de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e76edf5b89c88ac9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 453d5dde-5b2e-41b2-8719-b229c561d9de
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 453d5dde-5b2e-41b2-8719-b229c561d9de
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 453d5dde-5b2e-41b2-8719-b229c561d9de
This model is a fine-tuned version of [Eurdem/Defne_llama3_2x8B](https://huggingface.co/Eurdem/Defne_llama3_2x8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
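As a minimal sketch (assuming the adapter in this repo follows the standard PEFT layout produced by Axolotl), the LoRA can be applied on top of the base model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# load the base MoE model (trust_remote_code matches the training config above)
base = AutoModelForCausalLM.from_pretrained("Eurdem/Defne_llama3_2x8B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Eurdem/Defne_llama3_2x8B")

# attach the LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "dimasik1987/453d5dde-5b2e-41b2-8719-b229c561d9de")
```
Note that the NaN validation loss reported above suggests the run may not have converged; verify outputs before relying on this adapter.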
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0004 | 8 | nan |
| 0.0 | 0.0007 | 16 | nan |
| 0.0 | 0.0011 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Immy_v3-i1-GGUF | mradermacher | 2025-01-03T06:34:57Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"instruction-following",
"unsloth",
"llama",
"trl",
"en",
"base_model:critical-hf/Immy_v3",
"base_model:quantized:critical-hf/Immy_v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| text-generation | 2025-01-03T02:56:20Z | ---
base_model: critical-hf/Immy_v3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation
- instruction-following
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/critical-hf/Immy_v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Immy_v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
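As a minimal sketch in Python (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed), one of the quants below can be downloaded and run like this:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch the recommended Q4_K_M quant from this repo and load it
path = hf_hub_download("mradermacher/Immy_v3-i1-GGUF", "Immy_v3.i1-Q4_K_M.gguf")
llm = Llama(model_path=path)
print(llm("Hello, how are you?", max_tokens=32)["choices"][0]["text"])
```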
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q4_0.gguf) | i1-Q4_0 | 1.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Immy_v3-i1-GGUF/resolve/main/Immy_v3.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PassbyGrocer/hreb-msra | PassbyGrocer | 2025-01-03T06:34:18Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:hfl/chinese-roberta-wwm-ext-large",
"base_model:finetune:hfl/chinese-roberta-wwm-ext-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-01-02T06:03:44Z | ---
library_name: transformers
license: apache-2.0
base_model: hfl/chinese-roberta-wwm-ext-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: robert_bilstm_mega_res-ner-msra-ner-ner-msra-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robert_bilstm_mega_res-ner-msra-ner-ner-msra-ner
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Precision: 0.9538
- Recall: 0.9573
- F1: 0.9555
- Accuracy: 0.9940
## Model description
More information needed
## Intended uses & limitations
More information needed
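As a minimal sketch (assuming the checkpoint ships the usual token-classification head and label map), the model can be used with the `transformers` pipeline:
```python
from transformers import pipeline

# "simple" aggregation groups word-piece predictions into whole entities
ner = pipeline("token-classification",
               model="PassbyGrocer/hreb-msra",
               aggregation_strategy="simple")
print(ner("我爱北京天安门"))  # expect location entities such as 北京 and 天安门
```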
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0239 | 1.0 | 725 | 0.0232 | 0.9242 | 0.9344 | 0.9293 | 0.9931 |
| 0.0139 | 2.0 | 1450 | 0.0254 | 0.9373 | 0.9459 | 0.9416 | 0.9925 |
| 0.006 | 3.0 | 2175 | 0.0294 | 0.9415 | 0.9480 | 0.9448 | 0.9930 |
| 0.0052 | 4.0 | 2900 | 0.0303 | 0.9389 | 0.9486 | 0.9437 | 0.9937 |
| 0.0049 | 5.0 | 3625 | 0.0303 | 0.9422 | 0.9498 | 0.9459 | 0.9933 |
| 0.0034 | 6.0 | 4350 | 0.0353 | 0.9411 | 0.9594 | 0.9502 | 0.9934 |
| 0.0015 | 7.0 | 5075 | 0.0372 | 0.9404 | 0.9498 | 0.9450 | 0.9927 |
| 0.0013 | 8.0 | 5800 | 0.0379 | 0.9477 | 0.9492 | 0.9485 | 0.9938 |
| 0.0006 | 9.0 | 6525 | 0.0405 | 0.9516 | 0.9502 | 0.9509 | 0.9937 |
| 0.0039 | 10.0 | 7250 | 0.0442 | 0.9420 | 0.9536 | 0.9478 | 0.9931 |
| 0.0013 | 11.0 | 7975 | 0.0393 | 0.9479 | 0.9528 | 0.9504 | 0.9936 |
| 0.001 | 12.0 | 8700 | 0.0431 | 0.9455 | 0.9513 | 0.9484 | 0.9933 |
| 0.0011 | 13.0 | 9425 | 0.0431 | 0.9487 | 0.9425 | 0.9455 | 0.9936 |
| 0.0003 | 14.0 | 10150 | 0.0425 | 0.9392 | 0.9450 | 0.9421 | 0.9933 |
| 0.0001 | 15.0 | 10875 | 0.0456 | 0.9475 | 0.9515 | 0.9495 | 0.9937 |
| 0.0011 | 16.0 | 11600 | 0.0446 | 0.9467 | 0.9471 | 0.9469 | 0.9928 |
| 0.0002 | 17.0 | 12325 | 0.0500 | 0.9532 | 0.9457 | 0.9495 | 0.9933 |
| 0.0001 | 18.0 | 13050 | 0.0504 | 0.9479 | 0.9490 | 0.9485 | 0.9929 |
| 0.0002 | 19.0 | 13775 | 0.0455 | 0.9463 | 0.9527 | 0.9495 | 0.9933 |
| 0.0013 | 20.0 | 14500 | 0.0471 | 0.9487 | 0.9544 | 0.9515 | 0.9933 |
| 0.0005 | 21.0 | 15225 | 0.0425 | 0.9491 | 0.9584 | 0.9537 | 0.9936 |
| 0.0009 | 22.0 | 15950 | 0.0503 | 0.9455 | 0.9555 | 0.9505 | 0.9931 |
| 0.0003 | 23.0 | 16675 | 0.0474 | 0.9530 | 0.9555 | 0.9543 | 0.9938 |
| 0.0006 | 24.0 | 17400 | 0.0481 | 0.9531 | 0.9538 | 0.9534 | 0.9937 |
| 0.0013 | 25.0 | 18125 | 0.0502 | 0.9467 | 0.9534 | 0.9500 | 0.9934 |
| 0.0001 | 26.0 | 18850 | 0.0517 | 0.9461 | 0.9492 | 0.9476 | 0.9933 |
| 0.0001 | 27.0 | 19575 | 0.0410 | 0.9536 | 0.9530 | 0.9533 | 0.9937 |
| 0.0011 | 28.0 | 20300 | 0.0453 | 0.9520 | 0.9498 | 0.9509 | 0.9937 |
| 0.0007 | 29.0 | 21025 | 0.0444 | 0.9479 | 0.9480 | 0.9479 | 0.9935 |
| 0.0 | 30.0 | 21750 | 0.0498 | 0.9529 | 0.9498 | 0.9513 | 0.9937 |
| 0.0001 | 31.0 | 22475 | 0.0490 | 0.9514 | 0.9496 | 0.9505 | 0.9935 |
| 0.001 | 32.0 | 23200 | 0.0499 | 0.9495 | 0.9486 | 0.9491 | 0.9934 |
| 0.0001 | 33.0 | 23925 | 0.0451 | 0.9499 | 0.9557 | 0.9528 | 0.9939 |
| 0.0002 | 34.0 | 24650 | 0.0469 | 0.9486 | 0.9563 | 0.9525 | 0.9937 |
| 0.0001 | 35.0 | 25375 | 0.0505 | 0.9568 | 0.9496 | 0.9532 | 0.9938 |
| 0.0003 | 36.0 | 26100 | 0.0491 | 0.9593 | 0.9525 | 0.9559 | 0.9942 |
| 0.0005 | 37.0 | 26825 | 0.0432 | 0.9551 | 0.9532 | 0.9542 | 0.9939 |
| 0.0003 | 38.0 | 27550 | 0.0465 | 0.9536 | 0.9486 | 0.9511 | 0.9937 |
| 0.0019 | 39.0 | 28275 | 0.0491 | 0.9574 | 0.9469 | 0.9521 | 0.9937 |
| 0.0 | 40.0 | 29000 | 0.0470 | 0.9582 | 0.9534 | 0.9558 | 0.9940 |
| 0.0008 | 41.0 | 29725 | 0.0477 | 0.9505 | 0.9538 | 0.9522 | 0.9937 |
| 0.0 | 42.0 | 30450 | 0.0544 | 0.9500 | 0.9542 | 0.9521 | 0.9937 |
| 0.0002 | 43.0 | 31175 | 0.0527 | 0.9571 | 0.9492 | 0.9531 | 0.9938 |
| 0.0005 | 44.0 | 31900 | 0.0510 | 0.9574 | 0.9513 | 0.9543 | 0.9939 |
| 0.0006 | 45.0 | 32625 | 0.0478 | 0.9527 | 0.9536 | 0.9532 | 0.9938 |
| 0.0001 | 46.0 | 33350 | 0.0464 | 0.9559 | 0.9517 | 0.9538 | 0.9937 |
| 0.0001 | 47.0 | 34075 | 0.0478 | 0.9578 | 0.9530 | 0.9554 | 0.9939 |
| 0.0 | 48.0 | 34800 | 0.0507 | 0.9574 | 0.9515 | 0.9544 | 0.9940 |
| 0.0 | 49.0 | 35525 | 0.0534 | 0.9531 | 0.9534 | 0.9532 | 0.9939 |
| 0.0004 | 50.0 | 36250 | 0.0512 | 0.9541 | 0.9530 | 0.9536 | 0.9941 |
| 0.0001 | 51.0 | 36975 | 0.0478 | 0.9549 | 0.9532 | 0.9541 | 0.9940 |
| 0.0001 | 52.0 | 37700 | 0.0446 | 0.9541 | 0.9555 | 0.9548 | 0.9942 |
| 0.0 | 53.0 | 38425 | 0.0522 | 0.9529 | 0.9509 | 0.9519 | 0.9935 |
| 0.0001 | 54.0 | 39150 | 0.0507 | 0.9552 | 0.9525 | 0.9538 | 0.9937 |
| 0.0003 | 55.0 | 39875 | 0.0493 | 0.9466 | 0.9484 | 0.9475 | 0.9930 |
| 0.0 | 56.0 | 40600 | 0.0496 | 0.9507 | 0.9496 | 0.9501 | 0.9934 |
| 0.0 | 57.0 | 41325 | 0.0502 | 0.9512 | 0.9559 | 0.9535 | 0.9940 |
| 0.0 | 58.0 | 42050 | 0.0528 | 0.9465 | 0.9525 | 0.9494 | 0.9932 |
| 0.0 | 59.0 | 42775 | 0.0578 | 0.9480 | 0.9503 | 0.9492 | 0.9931 |
| 0.0 | 60.0 | 43500 | 0.0557 | 0.9506 | 0.9486 | 0.9496 | 0.9935 |
| 0.0 | 61.0 | 44225 | 0.0487 | 0.9539 | 0.9521 | 0.9530 | 0.9936 |
| 0.0 | 62.0 | 44950 | 0.0519 | 0.9534 | 0.9536 | 0.9535 | 0.9938 |
| 0.0 | 63.0 | 45675 | 0.0532 | 0.9531 | 0.9554 | 0.9542 | 0.9939 |
| 0.0 | 64.0 | 46400 | 0.0572 | 0.9534 | 0.9527 | 0.9530 | 0.9938 |
| 0.0001 | 65.0 | 47125 | 0.0563 | 0.9550 | 0.9527 | 0.9538 | 0.9940 |
| 0.0 | 66.0 | 47850 | 0.0550 | 0.9568 | 0.9507 | 0.9538 | 0.9940 |
| 0.0 | 67.0 | 48575 | 0.0585 | 0.9480 | 0.9542 | 0.9511 | 0.9935 |
| 0.0003 | 68.0 | 49300 | 0.0607 | 0.9501 | 0.9496 | 0.9499 | 0.9936 |
| 0.0 | 69.0 | 50025 | 0.0577 | 0.9529 | 0.9548 | 0.9539 | 0.9939 |
| 0.0 | 70.0 | 50750 | 0.0583 | 0.9541 | 0.9569 | 0.9555 | 0.9941 |
| 0.0001 | 71.0 | 51475 | 0.0549 | 0.9530 | 0.9486 | 0.9508 | 0.9938 |
| 0.0 | 72.0 | 52200 | 0.0592 | 0.9546 | 0.9509 | 0.9528 | 0.9937 |
| 0.0 | 73.0 | 52925 | 0.0598 | 0.9524 | 0.9502 | 0.9513 | 0.9936 |
| 0.0 | 74.0 | 53650 | 0.0583 | 0.9530 | 0.9517 | 0.9523 | 0.9937 |
| 0.0 | 75.0 | 54375 | 0.0602 | 0.9513 | 0.9513 | 0.9513 | 0.9936 |
| 0.0 | 76.0 | 55100 | 0.0624 | 0.9510 | 0.9527 | 0.9518 | 0.9934 |
| 0.0 | 77.0 | 55825 | 0.0622 | 0.9523 | 0.9527 | 0.9525 | 0.9935 |
| 0.0 | 78.0 | 56550 | 0.0599 | 0.9509 | 0.9536 | 0.9522 | 0.9938 |
| 0.0 | 79.0 | 57275 | 0.0599 | 0.9509 | 0.9550 | 0.9529 | 0.9937 |
| 0.0 | 80.0 | 58000 | 0.0588 | 0.9551 | 0.9536 | 0.9544 | 0.9939 |
| 0.0 | 81.0 | 58725 | 0.0581 | 0.9547 | 0.9561 | 0.9554 | 0.9941 |
| 0.0 | 82.0 | 59450 | 0.0587 | 0.9574 | 0.9567 | 0.9571 | 0.9940 |
| 0.0 | 83.0 | 60175 | 0.0592 | 0.9533 | 0.9582 | 0.9558 | 0.9940 |
| 0.0 | 84.0 | 60900 | 0.0602 | 0.9534 | 0.9569 | 0.9551 | 0.9939 |
| 0.0 | 85.0 | 61625 | 0.0601 | 0.9530 | 0.9554 | 0.9542 | 0.9938 |
| 0.0 | 86.0 | 62350 | 0.0608 | 0.9528 | 0.9561 | 0.9545 | 0.9939 |
| 0.0 | 87.0 | 63075 | 0.0606 | 0.9560 | 0.9538 | 0.9549 | 0.9939 |
| 0.0 | 88.0 | 63800 | 0.0590 | 0.9514 | 0.9575 | 0.9544 | 0.9940 |
| 0.0 | 89.0 | 64525 | 0.0611 | 0.9542 | 0.9577 | 0.9559 | 0.9940 |
| 0.0002 | 90.0 | 65250 | 0.0617 | 0.9563 | 0.9567 | 0.9565 | 0.9940 |
| 0.0 | 91.0 | 65975 | 0.0611 | 0.9578 | 0.9555 | 0.9566 | 0.9940 |
| 0.0004 | 92.0 | 66700 | 0.0628 | 0.9510 | 0.9567 | 0.9539 | 0.9939 |
| 0.0 | 93.0 | 67425 | 0.0634 | 0.9523 | 0.9561 | 0.9542 | 0.9939 |
| 0.0 | 94.0 | 68150 | 0.0629 | 0.9534 | 0.9571 | 0.9552 | 0.9940 |
| 0.0 | 95.0 | 68875 | 0.0627 | 0.9523 | 0.9565 | 0.9544 | 0.9940 |
| 0.0 | 96.0 | 69600 | 0.0627 | 0.9528 | 0.9565 | 0.9547 | 0.9940 |
| 0.0 | 97.0 | 70325 | 0.0625 | 0.9536 | 0.9565 | 0.9550 | 0.9940 |
| 0.0 | 98.0 | 71050 | 0.0620 | 0.9558 | 0.9561 | 0.9559 | 0.9941 |
| 0.0 | 99.0 | 71775 | 0.0620 | 0.9543 | 0.9573 | 0.9558 | 0.9940 |
| 0.0 | 100.0 | 72500 | 0.0621 | 0.9538 | 0.9573 | 0.9555 | 0.9940 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.3.0+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Onkarn/POC_MultiLng-V1 | Onkarn | 2025-01-03T06:33:51Z | 146 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T06:32:12Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Charan-2714M/llama3-8b-instruct-ipc-sections | Charan-2714M | 2025-01-03T06:28:19Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-01-02T15:45:32Z | ---
library_name: transformers
tags: [text-generation]
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Harikrishnan46624/finetuned_llama2-1.1b-chat | Harikrishnan46624 | 2025-01-03T06:25:51Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"AI",
"NLP",
"LLM",
"ML",
"Generative AI",
"text2text-generation",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-11-22T05:30:22Z | ---
library_name: transformers
tags:
- AI
- NLP
- LLM
- ML
- Generative AI
language:
- en
metrics:
- accuracy
- bleu
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text2text-generation
---
# Model Card for TinyLlama-1.1B Fine-tuned on NLP, ML, Generative AI, and Computer Vision Q&A
This model is fine-tuned from the **TinyLlama-1.1B** base model to provide answers to domain-specific questions in **Natural Language Processing (NLP)**, **Machine Learning (ML)**, **Deep Learning (DL)**, **Generative AI**, and **Computer Vision (CV)**. It generates accurate and context-aware responses, making it suitable for educational, research, and professional applications.
---
## Model Details
### Model Description
This model provides concise, domain-specific answers to questions in AI-related fields. Built on the TinyLlama architecture and fine-tuned on a curated dataset of Q&A pairs, it aims for relevant, coherent responses.
- **Developed by:** Harikrishnan46624
- **Funded by:** Self-funded
- **Shared by:** Harikrishnan46624
- **Model Type:** Text-to-Text Generation
- **Language(s):** English
- **License:** Apache 2.0
- **Fine-tuned from:** TinyLlama-1.1B
---
### Model Sources
- **Repository:** [Fine-Tuning Notebook on GitHub](https://github.com/Harikrishnan46624/EduBotIQ/blob/main/Fine_tune/TinyLlama_fine_tuning.ipynb)
- **Demo:** [Demo Link to be Added]
---
## Use Cases
### Direct Use
- Answering technical questions in **AI**, **ML**, **DL**, **LLMs**, **Generative AI**, and **Computer Vision**.
- Supporting educational content creation, research discussions, and technical documentation.
### Downstream Use
- Fine-tuning for industry-specific applications like healthcare, finance, or legal tech.
- Integrating into specialized chatbots, virtual assistants, or automated knowledge bases.
### Out-of-Scope Use
- Generating non-English responses (English-only capability).
- Handling non-technical, unrelated queries outside the AI domain.
---
## Bias, Risks, and Limitations
- **Bias:** Trained on domain-specific datasets, the model may exhibit biases toward AI-related terminologies or fail to generalize well in other domains.
- **Risks:** May generate incorrect or misleading information if the query is ambiguous or goes beyond the model’s scope.
- **Limitations:** May struggle with highly complex or nuanced queries not covered in its fine-tuning data.
---
### Recommendations
- For critical or high-stakes applications, it’s recommended to use the model with human oversight.
- Regularly update the fine-tuning datasets to ensure alignment with the latest research and advancements in AI.
---
## How to Get Started
To use the model, install the `transformers` library and use the following code snippet:
```python
from transformers import pipeline

# Load the fine-tuned model (a causal LM, so the "text-generation" task is used)
model = pipeline("text-generation", model="Harikrishnan46624/finetuned_llama2-1.1b-chat")

# Generate a response
output = model("What is the difference between supervised and unsupervised learning?", max_new_tokens=128)
print(output)
```
|
taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-50k-new | taareshg | 2025-01-03T06:21:24Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T06:16:34Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** taareshg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
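A minimal usage sketch (assuming, as the "merge" in the repo name suggests, that this is a merged full checkpoint rather than a LoRA adapter; the "En-Hi" in the name suggests an English–Hindi focus):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-50k-new"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# build a chat prompt with the model's own template
messages = [{"role": "user", "content": "Translate to Hindi: How are you?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```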
|
yahyaabd/allstats-semantic-search-model-v1 | yahyaabd | 2025-01-03T06:15:40Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:212917",
"loss:CosineSimilarityLoss",
"dataset:yahyaabd/allstats-semantic-search-synthetic-dataset-v1",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-01-03T06:13:59Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:212917
- loss:CosineSimilarityLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
- source_sentence: statistik neraca arus dana indonesia
sentences:
- Statistik Kelapa Sawit Indonesia 2012
- Distribusi Perdagangan Komoditas Kedelai Indonesia 2023
- Data Runtun Statistik Konstruksi 1990-2010
- source_sentence: Seberapa besar kenaikan produksi IBS pada Triwulan IV Tahun 2013
dibandingkan Triwulan IV Tahun Sebelumnya?
sentences:
- Pertumbuhan PDB 2013 Mencapai 5,78 Persen
- Statistik Komuter Gerbangkertosusila Hasil Survei Komuter Gerbangkertosusila 2017
- Statistik Penduduk Lanjut Usia Provinsi Jawa Timur 2010-Hasil Sensus Penduduk
2010
- source_sentence: 'Penduduk Papua: migrasi 2015'
sentences:
- Rata-rata Upah/Gaji Bersih sebulan Buruh/Karyawan Pegawai Menurut Pendidikan Tertinggi
dan jenis pekerjaan utama, 2019
- Statistik Pemotongan Ternak 2010 dan 2011
- Statistik Harga Produsen Pertanian Sub Sektor Tanaman Pangan, Hortikultura dan
Perkebunan Rakyat 2010
- source_sentence: statistik konstruksi 2022
sentences:
- Studi Modal Sosial 2006
- BRS upah buruh agustus 2018
- Statistik Perdagangan Luar Negeri Indonesia Ekspor 2006 vol 1
- source_sentence: Statistik ekspor Indonesia Maret 2202
sentences:
- Produk Domestik Bruto Indonesia Triwulanan 2006-2010
- Indeks Perilaku Anti Korupsi (IPAK) Indonesia 2023 sebesar 3,92, menurun dibandingkan
IPAK 2022
- Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut HS, Januari 2023
datasets:
- yahyaabd/allstats-semantic-search-synthetic-dataset-v1
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstats semantic search v1 dev
type: allstats-semantic-search-v1-dev
metrics:
- type: pearson_cosine
value: 0.9894566758405579
name: Pearson Cosine
- type: spearman_cosine
value: 0.9072484378842124
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: allstat semantic search v1 test
type: allstat-semantic-search-v1-test
metrics:
- type: pearson_cosine
value: 0.9895284407960067
name: Pearson Cosine
- type: spearman_cosine
value: 0.9074137706349162
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [allstats-semantic-search-synthetic-dataset-v1](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [allstats-semantic-search-synthetic-dataset-v1](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-semantic-search-model-v1")
# Run inference
sentences = [
'Statistik ekspor Indonesia Maret 2202',
'Produk Domestik Bruto Indonesia Triwulanan 2006-2010',
'Buletin Statistik Perdagangan Luar Negeri Ekspor Menurut HS, Januari 2023',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `allstats-semantic-search-v1-dev` and `allstat-semantic-search-v1-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | allstats-semantic-search-v1-dev | allstat-semantic-search-v1-test |
|:--------------------|:--------------------------------|:--------------------------------|
| pearson_cosine | 0.9895 | 0.9895 |
| **spearman_cosine** | **0.9072** | **0.9074** |
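The same metric can be reproduced offline; here is a sketch (the `validation` split name is an assumption, while the `query`/`doc`/`label` columns are documented below):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("yahyaabd/allstats-semantic-search-model-v1")
dev = load_dataset("yahyaabd/allstats-semantic-search-synthetic-dataset-v1", split="validation")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=dev["query"],
    sentences2=dev["doc"],
    scores=dev["label"],
    name="allstats-semantic-search-v1-dev",
)
print(evaluator(model))  # reports Pearson/Spearman cosine correlations
```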
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### allstats-semantic-search-synthetic-dataset-v1
* Dataset: [allstats-semantic-search-synthetic-dataset-v1](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1) at [06f849a](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1/tree/06f849af5602fea6ce00e5ecdd9a99cd0cafc2de)
* Size: 212,917 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.48 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.89 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------|:------------------|
| <code>ringkasan aktivitas badan pusat statistik tahun 2018</code> | <code>Statistik Harga Produsen sektor pertanian di indonesia 2008</code> | <code>0.1</code> |
| <code>indikator kesejahteraan petani rejang lebong 2015</code> | <code>Diagram Timbang Nilai Tukar Petani Kabupaten Rejang Lebong 2015</code> | <code>0.82</code> |
| <code>Berapa persen kenaikan kunjungan wisatawan mancanegara pada April 2024?</code> | <code>Indeks Perilaku Anti Korupsi (IPAK) Indonesia 2023 sebesar 3,92, menurun dibandingkan IPAK 2022</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### allstats-semantic-search-synthetic-dataset-v1
* Dataset: [allstats-semantic-search-synthetic-dataset-v1](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1) at [06f849a](https://huggingface.co/datasets/yahyaabd/allstats-semantic-search-synthetic-dataset-v1/tree/06f849af5602fea6ce00e5ecdd9a99cd0cafc2de)
* Size: 26,614 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.21 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.41 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| query | doc | label |
|:-----------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------|:------------------|
| <code>Laporan bulanan ekonomi Indonesia bulan November 201</code> | <code>Laporan Bulanan Data Sosial Ekonomi November 2021</code> | <code>0.92</code> |
| <code>pekerjaan layak di indonesia tahun 2022: data dan analisis</code> | <code>Statistik Penduduk Lanjut Usia Provinsi Papua Barat 2010-Hasil Sensus Penduduk 2010</code> | <code>0.09</code> |
| <code>Tabel pendapatan rata-rata pekerja lepas berdasarkan provinsi dan pendidikan tahun 2021</code> | <code>Nilai Impor Kendaraan Bermotor Menurut Negara Asal Utama (Nilai CIF:juta US$), 2018-2023</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-search-v1-dev_spearman_cosine | allstat-semantic-search-v1-test_spearman_cosine |
|:------:|:-----:|:-------------:|:---------------:|:-----------------------------------------------:|:-----------------------------------------------:|
| 0.0376 | 250 | 0.0683 | 0.0432 | 0.6955 | - |
| 0.0751 | 500 | 0.0393 | 0.0322 | 0.7230 | - |
| 0.1127 | 750 | 0.0321 | 0.0270 | 0.7476 | - |
| 0.1503 | 1000 | 0.0255 | 0.0226 | 0.7789 | - |
| 0.1879 | 1250 | 0.024 | 0.0213 | 0.7683 | - |
| 0.2254 | 1500 | 0.022 | 0.0199 | 0.7727 | - |
| 0.2630 | 1750 | 0.0219 | 0.0195 | 0.7853 | - |
| 0.3006 | 2000 | 0.0202 | 0.0188 | 0.7795 | - |
| 0.3381 | 2250 | 0.0191 | 0.0187 | 0.7943 | - |
| 0.3757 | 2500 | 0.0198 | 0.0178 | 0.7842 | - |
| 0.4133 | 2750 | 0.0179 | 0.0184 | 0.7974 | - |
| 0.4509 | 3000 | 0.0179 | 0.0194 | 0.7810 | - |
| 0.4884 | 3250 | 0.0182 | 0.0168 | 0.8080 | - |
| 0.5260 | 3500 | 0.0174 | 0.0164 | 0.8131 | - |
| 0.5636 | 3750 | 0.0174 | 0.0154 | 0.8113 | - |
| 0.6011 | 4000 | 0.0169 | 0.0157 | 0.7981 | - |
| 0.6387 | 4250 | 0.0152 | 0.0146 | 0.8099 | - |
| 0.6763 | 4500 | 0.0148 | 0.0147 | 0.8091 | - |
| 0.7139 | 4750 | 0.0145 | 0.0145 | 0.8178 | - |
| 0.7514 | 5000 | 0.014 | 0.0139 | 0.8184 | - |
| 0.7890 | 5250 | 0.0145 | 0.0130 | 0.8166 | - |
| 0.8266 | 5500 | 0.0134 | 0.0129 | 0.8306 | - |
| 0.8641 | 5750 | 0.013 | 0.0122 | 0.8251 | - |
| 0.9017 | 6000 | 0.0136 | 0.0130 | 0.8265 | - |
| 0.9393 | 6250 | 0.0123 | 0.0126 | 0.8224 | - |
| 0.9769 | 6500 | 0.0113 | 0.0120 | 0.8305 | - |
| 1.0144 | 6750 | 0.0129 | 0.0117 | 0.8204 | - |
| 1.0520 | 7000 | 0.0106 | 0.0116 | 0.8284 | - |
| 1.0896 | 7250 | 0.01 | 0.0116 | 0.8303 | - |
| 1.1271 | 7500 | 0.0096 | 0.0110 | 0.8303 | - |
| 1.1647 | 7750 | 0.01 | 0.0113 | 0.8305 | - |
| 1.2023 | 8000 | 0.0116 | 0.0108 | 0.8300 | - |
| 1.2399 | 8250 | 0.0095 | 0.0104 | 0.8432 | - |
| 1.2774 | 8500 | 0.009 | 0.0104 | 0.8370 | - |
| 1.3150 | 8750 | 0.0101 | 0.0102 | 0.8434 | - |
| 1.3526 | 9000 | 0.01 | 0.0097 | 0.8450 | - |
| 1.3901 | 9250 | 0.0097 | 0.0103 | 0.8286 | - |
| 1.4277 | 9500 | 0.0092 | 0.0096 | 0.8393 | - |
| 1.4653 | 9750 | 0.0093 | 0.0089 | 0.8480 | - |
| 1.5029 | 10000 | 0.0088 | 0.0090 | 0.8439 | - |
| 1.5404 | 10250 | 0.0087 | 0.0089 | 0.8569 | - |
| 1.5780 | 10500 | 0.0082 | 0.0088 | 0.8488 | - |
| 1.6156 | 10750 | 0.009 | 0.0089 | 0.8493 | - |
| 1.6531 | 11000 | 0.0086 | 0.0086 | 0.8499 | - |
| 1.6907 | 11250 | 0.0076 | 0.0083 | 0.8600 | - |
| 1.7283 | 11500 | 0.0076 | 0.0081 | 0.8621 | - |
| 1.7659 | 11750 | 0.0079 | 0.0081 | 0.8611 | - |
| 1.8034 | 12000 | 0.0082 | 0.0085 | 0.8540 | - |
| 1.8410 | 12250 | 0.0074 | 0.0081 | 0.8620 | - |
| 1.8786 | 12500 | 0.007 | 0.0080 | 0.8639 | - |
| 1.9161 | 12750 | 0.0071 | 0.0083 | 0.8450 | - |
| 1.9537 | 13000 | 0.007 | 0.0076 | 0.8585 | - |
| 1.9913 | 13250 | 0.0072 | 0.0074 | 0.8640 | - |
| 2.0289 | 13500 | 0.0055 | 0.0069 | 0.8699 | - |
| 2.0664 | 13750 | 0.0056 | 0.0068 | 0.8673 | - |
| 2.1040 | 14000 | 0.0052 | 0.0066 | 0.8723 | - |
| 2.1416 | 14250 | 0.0059 | 0.0069 | 0.8644 | - |
| 2.1791 | 14500 | 0.0055 | 0.0068 | 0.8670 | - |
| 2.2167 | 14750 | 0.005 | 0.0065 | 0.8723 | - |
| 2.2543 | 15000 | 0.0053 | 0.0066 | 0.8766 | - |
| 2.2919 | 15250 | 0.0057 | 0.0065 | 0.8782 | - |
| 2.3294 | 15500 | 0.0053 | 0.0064 | 0.8749 | - |
| 2.3670 | 15750 | 0.0056 | 0.0070 | 0.8708 | - |
| 2.4046 | 16000 | 0.0058 | 0.0065 | 0.8731 | - |
| 2.4421 | 16250 | 0.0047 | 0.0064 | 0.8793 | - |
| 2.4797 | 16500 | 0.0049 | 0.0063 | 0.8801 | - |
| 2.5173 | 16750 | 0.0051 | 0.0063 | 0.8782 | - |
| 2.5549 | 17000 | 0.0053 | 0.0060 | 0.8799 | - |
| 2.5924 | 17250 | 0.0051 | 0.0059 | 0.8825 | - |
| 2.6300 | 17500 | 0.0048 | 0.0060 | 0.8761 | - |
| 2.6676 | 17750 | 0.0055 | 0.0055 | 0.8773 | - |
| 2.7051 | 18000 | 0.0045 | 0.0053 | 0.8833 | - |
| 2.7427 | 18250 | 0.0041 | 0.0053 | 0.8868 | - |
| 2.7803 | 18500 | 0.0051 | 0.0054 | 0.8811 | - |
| 2.8179 | 18750 | 0.004 | 0.0052 | 0.8881 | - |
| 2.8554 | 19000 | 0.0043 | 0.0053 | 0.8764 | - |
| 2.8930 | 19250 | 0.0047 | 0.0051 | 0.8874 | - |
| 2.9306 | 19500 | 0.0038 | 0.0051 | 0.8922 | - |
| 2.9681 | 19750 | 0.0047 | 0.0050 | 0.8821 | - |
| 3.0057 | 20000 | 0.0037 | 0.0048 | 0.8911 | - |
| 3.0433 | 20250 | 0.0031 | 0.0048 | 0.8911 | - |
| 3.0809 | 20500 | 0.0032 | 0.0046 | 0.8934 | - |
| 3.1184 | 20750 | 0.0034 | 0.0046 | 0.8942 | - |
| 3.1560 | 21000 | 0.0028 | 0.0045 | 0.8976 | - |
| 3.1936 | 21250 | 0.0034 | 0.0045 | 0.8932 | - |
| 3.2311 | 21500 | 0.003 | 0.0044 | 0.8959 | - |
| 3.2687 | 21750 | 0.0033 | 0.0044 | 0.8961 | - |
| 3.3063 | 22000 | 0.0029 | 0.0043 | 0.8995 | - |
| 3.3439 | 22250 | 0.0029 | 0.0044 | 0.8978 | - |
| 3.3814 | 22500 | 0.0027 | 0.0043 | 0.8998 | - |
| 3.4190 | 22750 | 0.003 | 0.0043 | 0.9019 | - |
| 3.4566 | 23000 | 0.0027 | 0.0042 | 0.8982 | - |
| 3.4941 | 23250 | 0.0027 | 0.0042 | 0.9014 | - |
| 3.5317 | 23500 | 0.0034 | 0.0042 | 0.9025 | - |
| 3.5693 | 23750 | 0.003 | 0.0041 | 0.9027 | - |
| 3.6069 | 24000 | 0.0029 | 0.0041 | 0.9003 | - |
| 3.6444 | 24250 | 0.0027 | 0.0040 | 0.9023 | - |
| 3.6820 | 24500 | 0.0027 | 0.0040 | 0.9035 | - |
| 3.7196 | 24750 | 0.0033 | 0.0040 | 0.9042 | - |
| 3.7571 | 25000 | 0.0028 | 0.0039 | 0.9053 | - |
| 3.7947 | 25250 | 0.0027 | 0.0039 | 0.9049 | - |
| 3.8323 | 25500 | 0.0033 | 0.0039 | 0.9057 | - |
| 3.8699 | 25750 | 0.0025 | 0.0039 | 0.9075 | - |
| 3.9074 | 26000 | 0.003 | 0.0039 | 0.9068 | - |
| 3.9450 | 26250 | 0.0026 | 0.0039 | 0.9073 | - |
| 3.9826 | 26500 | 0.0023 | 0.0038 | 0.9072 | - |
| 4.0 | 26616 | - | - | - | 0.9074 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.2.2+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
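For context, the checkpoint trained above is used like any other Sentence Transformers model. A minimal hedged sketch, in which the repo id is a hypothetical placeholder and the example sentences are illustrative:
```py
# Hedged sketch: encode two texts and score them with cosine similarity,
# the metric behind the spearman_cosine columns in the training logs above.
# "your-org/allstats-semantic-search-v1" is a hypothetical placeholder id.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-org/allstats-semantic-search-v1")
embeddings = model.encode(["average household income", "median wage statistics"])
score = model.similarity(embeddings[0:1], embeddings[1:2])  # cosine by default
print(score)
```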
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Jopqior/sft-model-tmp | Jopqior | 2025-01-03T06:10:06Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-03T06:09:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
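Since this auto-generated card omits the snippet, here is a minimal hedged sketch based only on the repo tags (`gpt2`, `text-generation`); the prompt and generation settings are illustrative, not the authors' choices:
```py
# Hedged sketch: plain causal-LM generation with the fine-tuned GPT-2 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jopqior/sft-model-tmp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```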
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shutto/RagChatbotAssistantForQA | Shutto | 2025-01-03T06:08:30Z | 90 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-12-10T01:34:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Shelton Simbi
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
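Since this auto-generated card omits the snippet, here is a minimal hedged sketch based only on the repo tags (`gpt2` backbone, `text-classification` pipeline); the example input and any label names are assumptions:
```py
# Hedged sketch: run the checkpoint through the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="Shutto/RagChatbotAssistantForQA")
print(classifier("How do I reset my account password?"))
```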
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shaheercp/Dulquersalman | shaheercp | 2025-01-03T06:05:40Z | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-03T05:19:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DULQUERSALMAN
---
# Dulquersalman
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DULQUERSALMAN` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Load the FLUX.1-dev base pipeline, attach this LoRA, then generate.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('shaheercp/Dulquersalman', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Qwen2.5-14B-Kebab-v0-GGUF | mradermacher | 2025-01-03T05:59:20Z | 179 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Hasnonname/Qwen2.5-14B-Kebab-v0",
"base_model:quantized:Hasnonname/Qwen2.5-14B-Kebab-v0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T04:24:11Z | ---
base_model: Hasnonname/Qwen2.5-14B-Kebab-v0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hasnonname/Qwen2.5-14B-Kebab-v0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
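As a hedged illustration beyond those READMEs, a downloaded quant can be run with llama-cpp-python; the file name is the Q4_K_M entry from the table below, while the context size, prompt, and token budget are illustrative assumptions:
```py
# Hedged sketch: load a local GGUF quant and generate with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Qwen2.5-14B-Kebab-v0.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```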
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Kebab-v0-GGUF/resolve/main/Qwen2.5-14B-Kebab-v0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/coco-illustrious-noobai-style-v50-sdxl | John6666 | 2025-01-03T05:58:25Z | 2,637 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"character",
"high quality without LoRA",
"girls",
"cute",
"posing",
"background",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-0.65s",
"base_model:finetune:Laxhar/noobai-XL-Vpred-0.65s",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-01-03T05:52:40Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- character
- high quality without LoRA
- girls
- cute
- posing
- background
- illustrious
base_model: Laxhar/noobai-XL-Vpred-0.65s
---
Original model is [here](https://civitai.com/models/955253/coco-illustrious-noobai-xl-style?modelVersionId=1233363).
This model was created by [COCO_OIOI01](https://civitai.com/user/COCO_OIOI01).
|
mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF | mradermacher | 2025-01-03T05:51:26Z | 297 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:netcat420/MFANN",
"base_model:netcat420/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN",
"base_model:quantized:netcat420/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T04:44:18Z | ---
base_model: netcat420/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN
datasets:
- netcat420/MFANN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/netcat420/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
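As a hedged illustration, a single quant file can also be fetched with `huggingface_hub` instead of cloning the whole repo; the file name below is the Q4_K_M entry from the table that follows:
```py
# Hedged sketch: download one quant file from this repo into the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF",
    filename="Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF
```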
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN-GGUF/resolve/main/Qwen2.5-7B-nerd-uncensored-v0.9-MFANN.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
wisenut-nlp-team/Wisedom-8B-EmbeddingReordering | wisenut-nlp-team | 2025-01-03T05:37:06Z | 1,906 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-27T00:54:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
3.1 base
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
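Since this auto-generated card omits the snippet, here is a minimal hedged sketch based only on the repo tags (`llama`, `text-generation`, `conversational`); it assumes the tokenizer ships a chat template, and the dtype/device settings are illustrative:
```py
# Hedged sketch: chat-style generation with the Llama-based checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wisenut-nlp-team/Wisedom-8B-EmbeddingReordering"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what embedding reordering means."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```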
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wisenut-nlp-team/Wisedom-8B-VocabExpansion | wisenut-nlp-team | 2025-01-03T05:36:40Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-27T02:32:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
3.0 base
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daweezy/turbov2 | daweezy | 2025-01-03T05:17:03Z | 127 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-03T04:31:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: turbo
---
# Turbov2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `turbo` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Load the FLUX.1-dev base pipeline, attach this LoRA, then generate.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('daweezy/turbov2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF | mradermacher | 2025-01-03T05:15:25Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"mergekit",
"merge",
"en",
"base_model:pipihand01/QwQ-32B-Preview-abliterated-linear50",
"base_model:quantized:pipihand01/QwQ-32B-Preview-abliterated-linear50",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-03T03:27:29Z | ---
base_model: pipihand01/QwQ-32B-Preview-abliterated-linear50
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear50/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear50
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear50-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear50.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
snowian/ImageNet_32_btViT_256_4_99 | snowian | 2025-01-03T05:05:26Z | 5 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-01-03T05:05:21Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
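For context, the mixin pattern this repo uses looks like the hedged sketch below; `TinyNet` is a hypothetical example class, not the authors' architecture, and loading this repo's weights requires their original model class:
```py
# Hedged sketch: mixing PyTorchModelHubMixin into an nn.Module adds
# save_pretrained / from_pretrained / push_to_hub to the class.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # placeholder layer for illustration

model = TinyNet(dim=256)
model.save_pretrained("tinynet-local")        # writes weights + config
reloaded = TinyNet.from_pretrained("tinynet-local")
```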
VitoCorleone72/Franny | VitoCorleone72 | 2025-01-03T05:05:07Z | 99 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-03T05:04:58Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Francesca , wearing white knitted sweatshirt, smiling, ginger hair
output:
url: images/135634213.png
- text: Francesca, business attire, business room
output:
url: images/3516371131.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Francesca
---
# Franny
<Gallery />
## Model description
Franny
## Trigger words
You should use `Francesca` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/VitoCorleone72/Franny/tree/main) them in the Files & versions tab.
|
tonileonar/leonartoni | tonileonar | 2025-01-03T05:03:33Z | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-03T03:07:51Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: l3on@r
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# leonartoni
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `l3on@r` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
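For diffusers users, a hedged sketch mirroring the LoRA-loading pattern of similar cards; the `weight_name` below is an assumption about the uploaded file, so check the Files tab for the actual name:
```py
# Hedged sketch: attach this Fluxgym LoRA to the FLUX.1-dev base pipeline.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("tonileonar/leonartoni", weight_name="leonartoni.safetensors")  # file name assumed
image = pipeline("l3on@r portrait, studio lighting").images[0]
```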
|
tuanna08go/01f4268a-9c46-4354-87d8-b3828851bd8b | tuanna08go | 2025-01-03T04:49:28Z | 23 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-03T04:40:34Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 01f4268a-9c46-4354-87d8-b3828851bd8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e9a5de46d030ae07_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e9a5de46d030ae07_train_data.json
type:
field_input: user_prompt
field_instruction: system_prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/01f4268a-9c46-4354-87d8-b3828851bd8b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/e9a5de46d030ae07_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 01f4268a-9c46-4354-87d8-b3828851bd8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 01f4268a-9c46-4354-87d8-b3828851bd8b
warmup_steps: 2
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 01f4268a-9c46-4354-87d8-b3828851bd8b
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the `e9a5de46d030ae07_train_data.json` dataset described in the config above.
It achieves the following results on the evaluation set:
- Loss: 2.8006
## Model description
More information needed
## Intended uses & limitations
More information needed
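Since the card has no usage section, here is a minimal hedged sketch of loading the adapter on its base model with PEFT; the repo ids come from this card, while the prompt and generation settings are illustrative:
```py
# Hedged sketch: apply the LoRA adapter to the JackFram/llama-160m base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("JackFram/llama-160m")
model = PeftModel.from_pretrained(base, "tuanna08go/01f4268a-9c46-4354-87d8-b3828851bd8b")
tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-160m")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```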
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 3.1399 |
| 2.8562 | 0.0076 | 10 | 3.0681 |
| 2.7322 | 0.0152 | 20 | 2.9421 |
| 2.6651 | 0.0228 | 30 | 2.8514 |
| 2.5504 | 0.0304 | 40 | 2.8079 |
| 2.5387 | 0.0380 | 50 | 2.8006 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
snowian/ImageNet_32_btViT_256_4_97 | snowian | 2025-01-03T04:49:23Z | 5 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-01-03T04:49:17Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |