modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-08-02 12:29:30
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 548
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-08-02 12:29:18
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
salbatarni/arabert_baseline_grammar_task6_fold1
|
salbatarni
| 2024-08-29T07:06:54Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T07:05:44Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_grammar_task6_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_grammar_task6_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7851
- Qwk: 0.7296
- Mse: 0.7851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| No log | 0.5 | 2 | 1.4135 | -0.0678 | 1.4135 |
| No log | 1.0 | 4 | 0.9828 | 0.5032 | 0.9828 |
| No log | 1.5 | 6 | 1.1603 | 0.3288 | 1.1603 |
| No log | 2.0 | 8 | 1.0440 | 0.2036 | 1.0440 |
| No log | 2.5 | 10 | 0.8854 | 0.2383 | 0.8854 |
| No log | 3.0 | 12 | 0.7492 | 0.4348 | 0.7492 |
| No log | 3.5 | 14 | 0.7513 | 0.4348 | 0.7513 |
| No log | 4.0 | 16 | 0.8038 | 0.6500 | 0.8038 |
| No log | 4.5 | 18 | 0.8093 | 0.6500 | 0.8093 |
| No log | 5.0 | 20 | 0.8924 | 0.6847 | 0.8924 |
| No log | 5.5 | 22 | 1.0282 | 0.6805 | 1.0282 |
| No log | 6.0 | 24 | 0.9757 | 0.7296 | 0.9757 |
| No log | 6.5 | 26 | 0.9823 | 0.7296 | 0.9823 |
| No log | 7.0 | 28 | 0.9823 | 0.6957 | 0.9823 |
| No log | 7.5 | 30 | 0.8153 | 0.7296 | 0.8153 |
| No log | 8.0 | 32 | 0.7571 | 0.6192 | 0.7571 |
| No log | 8.5 | 34 | 0.7411 | 0.7014 | 0.7411 |
| No log | 9.0 | 36 | 0.7685 | 0.7296 | 0.7685 |
| No log | 9.5 | 38 | 0.7773 | 0.7296 | 0.7773 |
| No log | 10.0 | 40 | 0.7851 | 0.7296 | 0.7851 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
salbatarni/arabert_baseline_grammar_task5_fold1
|
salbatarni
| 2024-08-29T07:04:35Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T07:03:00Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_grammar_task5_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_grammar_task5_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4829
- Qwk: 0.6262
- Mse: 0.4829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.3333 | 2 | 2.4374 | 0.0250 | 2.4374 |
| No log | 0.6667 | 4 | 0.9732 | -0.0090 | 0.9732 |
| No log | 1.0 | 6 | 0.5943 | 0.3165 | 0.5943 |
| No log | 1.3333 | 8 | 0.5108 | 0.3182 | 0.5108 |
| No log | 1.6667 | 10 | 0.4747 | 0.3137 | 0.4747 |
| No log | 2.0 | 12 | 0.4634 | 0.3165 | 0.4634 |
| No log | 2.3333 | 14 | 0.4798 | 0.4509 | 0.4798 |
| No log | 2.6667 | 16 | 0.4795 | 0.4737 | 0.4795 |
| No log | 3.0 | 18 | 0.5467 | 0.5327 | 0.5467 |
| No log | 3.3333 | 20 | 0.5831 | 0.5327 | 0.5831 |
| No log | 3.6667 | 22 | 0.5213 | 0.6269 | 0.5213 |
| No log | 4.0 | 24 | 0.6213 | 0.7087 | 0.6213 |
| No log | 4.3333 | 26 | 0.6774 | 0.7236 | 0.6774 |
| No log | 4.6667 | 28 | 0.6694 | 0.7236 | 0.6694 |
| No log | 5.0 | 30 | 0.5668 | 0.7 | 0.5668 |
| No log | 5.3333 | 32 | 0.5235 | 0.7059 | 0.5235 |
| No log | 5.6667 | 34 | 0.5216 | 0.7059 | 0.5216 |
| No log | 6.0 | 36 | 0.5070 | 0.5957 | 0.5070 |
| No log | 6.3333 | 38 | 0.5038 | 0.6047 | 0.5038 |
| No log | 6.6667 | 40 | 0.5220 | 0.6606 | 0.5220 |
| No log | 7.0 | 42 | 0.5420 | 0.6377 | 0.5420 |
| No log | 7.3333 | 44 | 0.5474 | 0.6667 | 0.5474 |
| No log | 7.6667 | 46 | 0.5400 | 0.6262 | 0.5400 |
| No log | 8.0 | 48 | 0.5341 | 0.6262 | 0.5341 |
| No log | 8.3333 | 50 | 0.5282 | 0.6262 | 0.5282 |
| No log | 8.6667 | 52 | 0.5146 | 0.6262 | 0.5146 |
| No log | 9.0 | 54 | 0.4982 | 0.6262 | 0.4982 |
| No log | 9.3333 | 56 | 0.4857 | 0.6262 | 0.4857 |
| No log | 9.6667 | 58 | 0.4846 | 0.6262 | 0.4846 |
| No log | 10.0 | 60 | 0.4829 | 0.6262 | 0.4829 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
salbatarni/arabert_baseline_grammar_task3_fold0
|
salbatarni
| 2024-08-29T06:58:31Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:57:33Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_grammar_task3_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_grammar_task3_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
- Qwk: 0.0
- Mse: 0.6123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.6667 | 2 | 3.3662 | -0.0117 | 3.3080 |
| No log | 1.3333 | 4 | 1.1203 | -0.1172 | 1.0755 |
| No log | 2.0 | 6 | 0.8683 | -0.1134 | 0.8544 |
| No log | 2.6667 | 8 | 0.6489 | -0.0476 | 0.6406 |
| No log | 3.3333 | 10 | 0.4735 | -0.0476 | 0.4763 |
| No log | 4.0 | 12 | 0.3822 | -0.0476 | 0.3821 |
| No log | 4.6667 | 14 | 0.5267 | -0.0476 | 0.5316 |
| No log | 5.3333 | 16 | 0.4612 | -0.0476 | 0.4639 |
| No log | 6.0 | 18 | 0.3363 | 0.3529 | 0.3349 |
| No log | 6.6667 | 20 | 0.3888 | -0.0476 | 0.3896 |
| No log | 7.3333 | 22 | 0.4637 | -0.0476 | 0.4673 |
| No log | 8.0 | 24 | 0.5117 | 0.0 | 0.5172 |
| No log | 8.6667 | 26 | 0.5877 | 0.0 | 0.5954 |
| No log | 9.3333 | 28 | 0.6138 | 0.0 | 0.6226 |
| No log | 10.0 | 30 | 0.6034 | 0.0 | 0.6123 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
luaqi/sn29_merged_v10
|
luaqi
| 2024-08-29T06:55:28Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T06:52:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kaitchup/Meta-Llama-3.1-70B-Instruct-ExLlamaV2-4bit
|
kaitchup
| 2024-08-29T06:36:27Z | 5 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"exl2",
"region:us"
] | null | 2024-08-20T05:32:51Z |
Llama 3.1 70B instruct quantized with ExLlamaV2.
|
salbatarni/arabert_baseline_mechanics_task1_fold0
|
salbatarni
| 2024-08-29T06:34:22Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:32:42Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_mechanics_task1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_mechanics_task1_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6765
- Qwk: 0.4590
- Mse: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.3333 | 2 | 4.5225 | -0.0658 | 4.5651 |
| No log | 0.6667 | 4 | 1.8523 | 0.1370 | 1.8774 |
| No log | 1.0 | 6 | 0.9383 | 0.2326 | 0.9611 |
| No log | 1.3333 | 8 | 0.8679 | 0.3119 | 0.8947 |
| No log | 1.6667 | 10 | 0.8994 | 0.4015 | 0.9273 |
| No log | 2.0 | 12 | 1.2109 | 0.0582 | 1.2468 |
| No log | 2.3333 | 14 | 1.0652 | 0.0783 | 1.0979 |
| No log | 2.6667 | 16 | 0.8918 | 0.4726 | 0.9192 |
| No log | 3.0 | 18 | 0.9069 | 0.3931 | 0.9346 |
| No log | 3.3333 | 20 | 0.8533 | 0.3636 | 0.8812 |
| No log | 3.6667 | 22 | 0.7332 | 0.4582 | 0.7567 |
| No log | 4.0 | 24 | 0.7505 | 0.5333 | 0.7733 |
| No log | 4.3333 | 26 | 0.7390 | 0.475 | 0.7612 |
| No log | 4.6667 | 28 | 0.7760 | 0.3824 | 0.8000 |
| No log | 5.0 | 30 | 0.8078 | 0.3913 | 0.8335 |
| No log | 5.3333 | 32 | 0.7440 | 0.4450 | 0.7671 |
| No log | 5.6667 | 34 | 0.7175 | 0.4906 | 0.7395 |
| No log | 6.0 | 36 | 0.7050 | 0.5254 | 0.7261 |
| No log | 6.3333 | 38 | 0.7088 | 0.4450 | 0.7299 |
| No log | 6.6667 | 40 | 0.6952 | 0.4727 | 0.7141 |
| No log | 7.0 | 42 | 0.6847 | 0.4770 | 0.7014 |
| No log | 7.3333 | 44 | 0.6839 | 0.5054 | 0.7003 |
| No log | 7.6667 | 46 | 0.6830 | 0.4822 | 0.6983 |
| No log | 8.0 | 48 | 0.6803 | 0.4822 | 0.6954 |
| No log | 8.3333 | 50 | 0.6819 | 0.4279 | 0.6979 |
| No log | 8.6667 | 52 | 0.6851 | 0.4360 | 0.7017 |
| No log | 9.0 | 54 | 0.6844 | 0.4360 | 0.7010 |
| No log | 9.3333 | 56 | 0.6804 | 0.4360 | 0.6965 |
| No log | 9.6667 | 58 | 0.6769 | 0.4590 | 0.6926 |
| No log | 10.0 | 60 | 0.6765 | 0.4590 | 0.6922 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
YanqiDai/MMRole-Eval_RM
|
YanqiDai
| 2024-08-29T06:31:43Z | 5 | 1 | null |
[
"pytorch",
"qwen",
"custom_code",
"en",
"zh",
"dataset:YanqiDai/MMRole_dataset",
"arxiv:2408.04203",
"base_model:Qwen/Qwen-VL-Chat",
"base_model:finetune:Qwen/Qwen-VL-Chat",
"license:mit",
"region:us"
] | null | 2024-08-28T01:25:11Z |
---
license: mit
datasets:
- YanqiDai/MMRole_dataset
language:
- en
- zh
base_model: Qwen/Qwen-VL-Chat
---
The model weights of the reward model in *MMRole*, A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents.
Please refer to our paper (https://arxiv.org/abs/2408.04203) and code (https://github.com/YanqiDai/MMRole) for more details.
|
salbatarni/arabert_baseline_style_task6_fold1
|
salbatarni
| 2024-08-29T06:29:44Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:28:27Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_style_task6_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_style_task6_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8646
- Qwk: 0.5642
- Mse: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.5 | 2 | 1.9949 | 0.0653 | 1.9949 |
| No log | 1.0 | 4 | 0.9870 | 0.4463 | 0.9870 |
| No log | 1.5 | 6 | 1.0110 | 0.3212 | 1.0110 |
| No log | 2.0 | 8 | 1.2493 | 0.1888 | 1.2493 |
| No log | 2.5 | 10 | 1.3543 | 0.2877 | 1.3543 |
| No log | 3.0 | 12 | 0.9301 | 0.3348 | 0.9301 |
| No log | 3.5 | 14 | 0.8618 | 0.4615 | 0.8618 |
| No log | 4.0 | 16 | 0.8638 | 0.3275 | 0.8638 |
| No log | 4.5 | 18 | 0.8955 | 0.4340 | 0.8955 |
| No log | 5.0 | 20 | 0.9073 | 0.5391 | 0.9073 |
| No log | 5.5 | 22 | 0.8973 | 0.5817 | 0.8973 |
| No log | 6.0 | 24 | 0.9007 | 0.5817 | 0.9007 |
| No log | 6.5 | 26 | 0.8920 | 0.5817 | 0.8920 |
| No log | 7.0 | 28 | 0.8869 | 0.5817 | 0.8869 |
| No log | 7.5 | 30 | 0.8758 | 0.5817 | 0.8758 |
| No log | 8.0 | 32 | 0.8811 | 0.5642 | 0.8811 |
| No log | 8.5 | 34 | 0.8874 | 0.4943 | 0.8874 |
| No log | 9.0 | 36 | 0.8749 | 0.5642 | 0.8749 |
| No log | 9.5 | 38 | 0.8676 | 0.5642 | 0.8676 |
| No log | 10.0 | 40 | 0.8646 | 0.5642 | 0.8646 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
HafijulHoquenabid2/T5_flanlarge_phase_1
|
HafijulHoquenabid2
| 2024-08-29T06:26:11Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"question-answering",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-08-28T20:34:07Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: T5_flanlarge_phase_1
results: []
pipeline_tag: question-answering
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_flanlarge_phase_1
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5176 | 0.9987 | 595 | 1.4956 |
| 1.4423 | 1.9992 | 1191 | 1.4624 |
| 1.4197 | 2.9996 | 1787 | 1.4531 |
| 1.2841 | 3.9950 | 2380 | 1.4526 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
DanielTobi0/sabiyarn_custom_finetune
|
DanielTobi0
| 2024-08-29T06:16:59Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"nanogpt-j",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-08-29T05:55:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salbatarni/arabert_baseline_style_task1_fold0
|
salbatarni
| 2024-08-29T06:15:50Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:14:11Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_style_task1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_style_task1_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5100
- Qwk: 0.6698
- Mse: 0.5053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.3333 | 2 | 5.4651 | -0.0419 | 5.4241 |
| No log | 0.6667 | 4 | 2.3074 | 0.1693 | 2.2743 |
| No log | 1.0 | 6 | 1.2115 | 0.1561 | 1.2001 |
| No log | 1.3333 | 8 | 0.9959 | 0.4358 | 0.9923 |
| No log | 1.6667 | 10 | 0.7290 | 0.4921 | 0.7317 |
| No log | 2.0 | 12 | 0.7145 | 0.5587 | 0.7193 |
| No log | 2.3333 | 14 | 0.7329 | 0.5435 | 0.7351 |
| No log | 2.6667 | 16 | 0.7694 | 0.5219 | 0.7660 |
| No log | 3.0 | 18 | 0.9233 | 0.4358 | 0.9210 |
| No log | 3.3333 | 20 | 0.8596 | 0.4516 | 0.8545 |
| No log | 3.6667 | 22 | 0.7445 | 0.5743 | 0.7322 |
| No log | 4.0 | 24 | 0.7760 | 0.4773 | 0.7614 |
| No log | 4.3333 | 26 | 0.6783 | 0.5743 | 0.6684 |
| No log | 4.6667 | 28 | 0.7836 | 0.5152 | 0.7834 |
| No log | 5.0 | 30 | 0.7387 | 0.5188 | 0.7401 |
| No log | 5.3333 | 32 | 0.5679 | 0.5743 | 0.5656 |
| No log | 5.6667 | 34 | 0.5183 | 0.5743 | 0.5136 |
| No log | 6.0 | 36 | 0.5055 | 0.5743 | 0.5017 |
| No log | 6.3333 | 38 | 0.5518 | 0.5188 | 0.5511 |
| No log | 6.6667 | 40 | 0.6558 | 0.5188 | 0.6585 |
| No log | 7.0 | 42 | 0.6773 | 0.6025 | 0.6812 |
| No log | 7.3333 | 44 | 0.6216 | 0.6025 | 0.6242 |
| No log | 7.6667 | 46 | 0.5382 | 0.6698 | 0.5375 |
| No log | 8.0 | 48 | 0.5083 | 0.6698 | 0.5056 |
| No log | 8.3333 | 50 | 0.5045 | 0.6698 | 0.5004 |
| No log | 8.6667 | 52 | 0.5059 | 0.7151 | 0.5009 |
| No log | 9.0 | 54 | 0.5070 | 0.7151 | 0.5018 |
| No log | 9.3333 | 56 | 0.5069 | 0.7151 | 0.5016 |
| No log | 9.6667 | 58 | 0.5086 | 0.7151 | 0.5037 |
| No log | 10.0 | 60 | 0.5100 | 0.6698 | 0.5053 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
salbatarni/arabert_baseline_development_task7_fold0
|
salbatarni
| 2024-08-29T06:12:38Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:11:07Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_development_task7_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_development_task7_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Qwk: 0.6
- Mse: 0.3255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.3333 | 2 | 1.1202 | 0.0777 | 1.1202 |
| No log | 0.6667 | 4 | 0.6041 | 0.4521 | 0.6041 |
| No log | 1.0 | 6 | 0.5933 | 0.4048 | 0.5933 |
| No log | 1.3333 | 8 | 0.6033 | 0.4186 | 0.6033 |
| No log | 1.6667 | 10 | 0.4045 | 0.5380 | 0.4045 |
| No log | 2.0 | 12 | 0.4962 | 0.5370 | 0.4962 |
| No log | 2.3333 | 14 | 0.4729 | 0.5380 | 0.4729 |
| No log | 2.6667 | 16 | 0.4672 | 0.4643 | 0.4672 |
| No log | 3.0 | 18 | 0.5466 | 0.4421 | 0.5466 |
| No log | 3.3333 | 20 | 0.6361 | 0.4508 | 0.6361 |
| No log | 3.6667 | 22 | 0.4635 | 0.4421 | 0.4635 |
| No log | 4.0 | 24 | 0.3643 | 0.6 | 0.3643 |
| No log | 4.3333 | 26 | 0.3664 | 0.6237 | 0.3664 |
| No log | 4.6667 | 28 | 0.3535 | 0.6 | 0.3535 |
| No log | 5.0 | 30 | 0.3681 | 0.5545 | 0.3681 |
| No log | 5.3333 | 32 | 0.3906 | 0.5327 | 0.3906 |
| No log | 5.6667 | 34 | 0.3676 | 0.5327 | 0.3676 |
| No log | 6.0 | 36 | 0.3373 | 0.6 | 0.3373 |
| No log | 6.3333 | 38 | 0.3425 | 0.6324 | 0.3425 |
| No log | 6.6667 | 40 | 0.3594 | 0.5960 | 0.3594 |
| No log | 7.0 | 42 | 0.3550 | 0.6 | 0.3550 |
| No log | 7.3333 | 44 | 0.3547 | 0.5874 | 0.3547 |
| No log | 7.6667 | 46 | 0.3761 | 0.5726 | 0.3761 |
| No log | 8.0 | 48 | 0.3915 | 0.5726 | 0.3915 |
| No log | 8.3333 | 50 | 0.3777 | 0.5726 | 0.3777 |
| No log | 8.6667 | 52 | 0.3558 | 0.5642 | 0.3558 |
| No log | 9.0 | 54 | 0.3383 | 0.5874 | 0.3383 |
| No log | 9.3333 | 56 | 0.3286 | 0.6 | 0.3286 |
| No log | 9.6667 | 58 | 0.3263 | 0.6 | 0.3263 |
| No log | 10.0 | 60 | 0.3255 | 0.6 | 0.3255 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
salbatarni/arabert_baseline_development_task6_fold0
|
salbatarni
| 2024-08-29T06:09:51Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T06:08:38Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_development_task6_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_development_task6_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7890
- Qwk: 0.4503
- Mse: 0.7890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.5 | 2 | 1.5011 | 0.2195 | 1.5011 |
| No log | 1.0 | 4 | 1.0235 | 0.3558 | 1.0235 |
| No log | 1.5 | 6 | 1.1605 | 0.3913 | 1.1605 |
| No log | 2.0 | 8 | 1.0269 | 0.3558 | 1.0269 |
| No log | 2.5 | 10 | 0.9070 | 0.3558 | 0.9070 |
| No log | 3.0 | 12 | 0.8423 | 0.3913 | 0.8423 |
| No log | 3.5 | 14 | 0.7993 | 0.3558 | 0.7993 |
| No log | 4.0 | 16 | 0.7964 | 0.4740 | 0.7964 |
| No log | 4.5 | 18 | 0.8073 | 0.4740 | 0.8073 |
| No log | 5.0 | 20 | 0.8740 | 0.6091 | 0.8740 |
| No log | 5.5 | 22 | 0.8159 | 0.6056 | 0.8159 |
| No log | 6.0 | 24 | 0.8260 | 0.6056 | 0.8260 |
| No log | 6.5 | 26 | 0.8604 | 0.6056 | 0.8604 |
| No log | 7.0 | 28 | 0.8853 | 0.6056 | 0.8853 |
| No log | 7.5 | 30 | 0.8139 | 0.5172 | 0.8139 |
| No log | 8.0 | 32 | 0.7491 | 0.4503 | 0.7491 |
| No log | 8.5 | 34 | 0.7564 | 0.4503 | 0.7564 |
| No log | 9.0 | 36 | 0.7842 | 0.4503 | 0.7842 |
| No log | 9.5 | 38 | 0.7831 | 0.4503 | 0.7831 |
| No log | 10.0 | 40 | 0.7890 | 0.4503 | 0.7890 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
salbatarni/arabert_baseline_organization_task7_fold1
|
salbatarni
| 2024-08-29T05:55:18Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T05:53:50Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_organization_task7_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_organization_task7_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6217
- Qwk: 0.4969
- Mse: 0.6213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.3333 | 2 | 1.0245 | 0.3758 | 1.0297 |
| No log | 0.6667 | 4 | 0.7919 | 0.5312 | 0.8059 |
| No log | 1.0 | 6 | 0.8469 | 0.6400 | 0.8626 |
| No log | 1.3333 | 8 | 0.7621 | 0.3793 | 0.7785 |
| No log | 1.6667 | 10 | 0.8831 | 0.2326 | 0.8970 |
| No log | 2.0 | 12 | 0.7493 | 0.4 | 0.7628 |
| No log | 2.3333 | 14 | 0.4924 | 0.6165 | 0.5052 |
| No log | 2.6667 | 16 | 0.4499 | 0.6571 | 0.4612 |
| No log | 3.0 | 18 | 0.4936 | 0.6203 | 0.5029 |
| No log | 3.3333 | 20 | 0.7252 | 0.4224 | 0.7311 |
| No log | 3.6667 | 22 | 0.9227 | 0.3657 | 0.9263 |
| No log | 4.0 | 24 | 0.8957 | 0.3657 | 0.8980 |
| No log | 4.3333 | 26 | 0.7242 | 0.4211 | 0.7260 |
| No log | 4.6667 | 28 | 0.5595 | 0.6203 | 0.5620 |
| No log | 5.0 | 30 | 0.4521 | 0.6024 | 0.4559 |
| No log | 5.3333 | 32 | 0.4402 | 0.6786 | 0.4437 |
| No log | 5.6667 | 34 | 0.4703 | 0.6909 | 0.4728 |
| No log | 6.0 | 36 | 0.6144 | 0.5 | 0.6153 |
| No log | 6.3333 | 38 | 0.7150 | 0.4969 | 0.7150 |
| No log | 6.6667 | 40 | 0.7022 | 0.4969 | 0.7024 |
| No log | 7.0 | 42 | 0.6409 | 0.5 | 0.6418 |
| No log | 7.3333 | 44 | 0.5912 | 0.5 | 0.5923 |
| No log | 7.6667 | 46 | 0.5457 | 0.6341 | 0.5469 |
| No log | 8.0 | 48 | 0.5073 | 0.6909 | 0.5089 |
| No log | 8.3333 | 50 | 0.5143 | 0.6909 | 0.5157 |
| No log | 8.6667 | 52 | 0.5540 | 0.6341 | 0.5549 |
| No log | 9.0 | 54 | 0.5883 | 0.6272 | 0.5886 |
| No log | 9.3333 | 56 | 0.6103 | 0.4969 | 0.6102 |
| No log | 9.6667 | 58 | 0.6204 | 0.4969 | 0.6200 |
| No log | 10.0 | 60 | 0.6217 | 0.4969 | 0.6213 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Jaume/gte-multilingual-base-no-network
|
Jaume
| 2024-08-29T05:55:01Z | 19 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"feature-extraction",
"transformers",
"multilingual",
"sentence-similarity",
"custom_code",
"af",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ky",
"lo",
"lt",
"lv",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"pa",
"pl",
"pt",
"qu",
"ro",
"ru",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"uk",
"ur",
"vi",
"yo",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-08-29T05:52:29Z |
---
tags:
- sentence-transformers
- transformers
- multilingual
- sentence-similarity
license: apache-2.0
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ky
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- pa
- pl
- pt
- qu
- ro
- ru
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
---
## gte-multilingual-base (no network)
This is a mirror model for [`Alibaba-NLP/gte-multilingual-base`](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) in which all code is included in the same repository so no external network connection is needed when loading the model from local.
|
salbatarni/arabert_baseline_organization_task3_fold1
|
salbatarni
| 2024-08-29T05:45:22Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"region:us"
] | null | 2024-08-29T05:44:20Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_baseline_organization_task3_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert_baseline_organization_task3_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5660
- Qwk: 0.0120
- Mse: 0.6114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log | 0.6667 | 2 | 3.6631 | 0.0494 | 3.6515 |
| No log | 1.3333 | 4 | 1.3687 | -0.0302 | 1.4053 |
| No log | 2.0 | 6 | 1.0057 | -0.0097 | 1.0728 |
| No log | 2.6667 | 8 | 0.5822 | 0.0403 | 0.6339 |
| No log | 3.3333 | 10 | 0.6456 | 0.0120 | 0.7020 |
| No log | 4.0 | 12 | 0.7331 | 0.0 | 0.7912 |
| No log | 4.6667 | 14 | 0.7761 | 0.0 | 0.8325 |
| No log | 5.3333 | 16 | 0.6870 | 0.0 | 0.7416 |
| No log | 6.0 | 18 | 0.6157 | 0.0 | 0.6661 |
| No log | 6.6667 | 20 | 0.5690 | 0.0120 | 0.6176 |
| No log | 7.3333 | 22 | 0.5419 | 0.1646 | 0.5867 |
| No log | 8.0 | 24 | 0.5390 | 0.1646 | 0.5830 |
| No log | 8.6667 | 26 | 0.5477 | 0.0120 | 0.5917 |
| No log | 9.3333 | 28 | 0.5587 | 0.0120 | 0.6034 |
| No log | 10.0 | 30 | 0.5660 | 0.0120 | 0.6114 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
|
yixuan-chia
| 2024-08-29T05:39:56Z | 13 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"base_model:Snowflake/snowflake-arctic-embed-m-v1.5",
"base_model:quantized:Snowflake/snowflake-arctic-embed-m-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-08-29T05:39:52Z |
---
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: snowflake-arctic-embed-m-v1.5
results:
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 59.53000000000001
- type: map_at_1
value: 34.282000000000004
- type: map_at_10
value: 50.613
- type: map_at_100
value: 51.269
- type: map_at_1000
value: 51.271
- type: map_at_20
value: 51.158
- type: map_at_3
value: 45.626
- type: map_at_5
value: 48.638
- type: mrr_at_1
value: 34.92176386913229
- type: mrr_at_10
value: 50.856081645555406
- type: mrr_at_100
value: 51.510739437069034
- type: mrr_at_1000
value: 51.51299498830165
- type: mrr_at_20
value: 51.39987941081724
- type: mrr_at_3
value: 45.993361782835514
- type: mrr_at_5
value: 48.88098624940742
- type: nauc_map_at_1000_diff1
value: 10.628675774160785
- type: nauc_map_at_1000_max
value: -10.11742589992339
- type: nauc_map_at_1000_std
value: -18.29277379812427
- type: nauc_map_at_100_diff1
value: 10.63250240035489
- type: nauc_map_at_100_max
value: -10.112078786734363
- type: nauc_map_at_100_std
value: -18.288524872706834
- type: nauc_map_at_10_diff1
value: 10.476494913081712
- type: nauc_map_at_10_max
value: -9.890937746734037
- type: nauc_map_at_10_std
value: -18.279750514750443
- type: nauc_map_at_1_diff1
value: 14.549204048461151
- type: nauc_map_at_1_max
value: -12.230560087701225
- type: nauc_map_at_1_std
value: -19.469903650130362
- type: nauc_map_at_20_diff1
value: 10.586564571825674
- type: nauc_map_at_20_max
value: -10.00292720526217
- type: nauc_map_at_20_std
value: -18.258077347878064
- type: nauc_map_at_3_diff1
value: 10.378663968090372
- type: nauc_map_at_3_max
value: -10.458896171786185
- type: nauc_map_at_3_std
value: -18.38852760333766
- type: nauc_map_at_5_diff1
value: 10.235960275925581
- type: nauc_map_at_5_max
value: -10.239496080409058
- type: nauc_map_at_5_std
value: -18.817023479445886
- type: nauc_mrr_at_1000_diff1
value: 8.718212649575722
- type: nauc_mrr_at_1000_max
value: -10.81022794038691
- type: nauc_mrr_at_1000_std
value: -17.87669499555167
- type: nauc_mrr_at_100_diff1
value: 8.722174171165133
- type: nauc_mrr_at_100_max
value: -10.804840985713525
- type: nauc_mrr_at_100_std
value: -17.872487099359986
- type: nauc_mrr_at_10_diff1
value: 8.609421635870238
- type: nauc_mrr_at_10_max
value: -10.568644717548432
- type: nauc_mrr_at_10_std
value: -17.872968762635814
- type: nauc_mrr_at_1_diff1
value: 12.69590006263834
- type: nauc_mrr_at_1_max
value: -12.082056561238321
- type: nauc_mrr_at_1_std
value: -18.036424092186657
- type: nauc_mrr_at_20_diff1
value: 8.684842497970315
- type: nauc_mrr_at_20_max
value: -10.691578914627286
- type: nauc_mrr_at_20_std
value: -17.84350301434992
- type: nauc_mrr_at_3_diff1
value: 8.649761557556763
- type: nauc_mrr_at_3_max
value: -11.104694428047496
- type: nauc_mrr_at_3_std
value: -18.149917948370344
- type: nauc_mrr_at_5_diff1
value: 8.433489750038396
- type: nauc_mrr_at_5_max
value: -10.917772454397436
- type: nauc_mrr_at_5_std
value: -18.4094211134111
- type: nauc_ndcg_at_1000_diff1
value: 10.19041067807956
- type: nauc_ndcg_at_1000_max
value: -9.54328201605796
- type: nauc_ndcg_at_1000_std
value: -17.824620427456633
- type: nauc_ndcg_at_100_diff1
value: 10.289491087585963
- type: nauc_ndcg_at_100_max
value: -9.357214331420337
- type: nauc_ndcg_at_100_std
value: -17.657600653632873
- type: nauc_ndcg_at_10_diff1
value: 9.435530877596092
- type: nauc_ndcg_at_10_max
value: -8.182581635383546
- type: nauc_ndcg_at_10_std
value: -17.603156479980388
- type: nauc_ndcg_at_1_diff1
value: 14.549204048461151
- type: nauc_ndcg_at_1_max
value: -12.230560087701225
- type: nauc_ndcg_at_1_std
value: -19.469903650130362
- type: nauc_ndcg_at_20_diff1
value: 9.885227087275197
- type: nauc_ndcg_at_20_max
value: -8.52362662391439
- type: nauc_ndcg_at_20_std
value: -17.441705436231764
- type: nauc_ndcg_at_3_diff1
value: 9.22542769998547
- type: nauc_ndcg_at_3_max
value: -9.903590564219288
- type: nauc_ndcg_at_3_std
value: -18.357220221111593
- type: nauc_ndcg_at_5_diff1
value: 8.8756720745828
- type: nauc_ndcg_at_5_max
value: -9.269764943861245
- type: nauc_ndcg_at_5_std
value: -19.009229433187784
- type: nauc_precision_at_1000_diff1
value: 3.733355117431035
- type: nauc_precision_at_1000_max
value: 3.9603571352517393
- type: nauc_precision_at_1000_std
value: 70.07345061131439
- type: nauc_precision_at_100_diff1
value: 29.019032142462457
- type: nauc_precision_at_100_max
value: 40.75153328286103
- type: nauc_precision_at_100_std
value: 62.634249549126594
- type: nauc_precision_at_10_diff1
value: 2.5762677254910353
- type: nauc_precision_at_10_max
value: 6.096298633773051
- type: nauc_precision_at_10_std
value: -11.507400451348587
- type: nauc_precision_at_1_diff1
value: 14.549204048461151
- type: nauc_precision_at_1_max
value: -12.230560087701225
- type: nauc_precision_at_1_std
value: -19.469903650130362
- type: nauc_precision_at_20_diff1
value: 1.715540124567996
- type: nauc_precision_at_20_max
value: 21.53546453945913
- type: nauc_precision_at_20_std
value: 1.537961142195571
- type: nauc_precision_at_3_diff1
value: 5.701850652555737
- type: nauc_precision_at_3_max
value: -8.180345365085552
- type: nauc_precision_at_3_std
value: -18.37033750502482
- type: nauc_precision_at_5_diff1
value: 3.6053552181042843
- type: nauc_precision_at_5_max
value: -5.207647070615612
- type: nauc_precision_at_5_std
value: -19.89491085427258
- type: nauc_recall_at_1000_diff1
value: 3.733355117431255
- type: nauc_recall_at_1000_max
value: 3.9603571352482194
- type: nauc_recall_at_1000_std
value: 70.07345061131205
- type: nauc_recall_at_100_diff1
value: 29.01903214246288
- type: nauc_recall_at_100_max
value: 40.7515332828621
- type: nauc_recall_at_100_std
value: 62.63424954912607
- type: nauc_recall_at_10_diff1
value: 2.5762677254911988
- type: nauc_recall_at_10_max
value: 6.0962986337729905
- type: nauc_recall_at_10_std
value: -11.507400451348577
- type: nauc_recall_at_1_diff1
value: 14.549204048461151
- type: nauc_recall_at_1_max
value: -12.230560087701225
- type: nauc_recall_at_1_std
value: -19.469903650130362
- type: nauc_recall_at_20_diff1
value: 1.7155401245682675
- type: nauc_recall_at_20_max
value: 21.535464539459632
- type: nauc_recall_at_20_std
value: 1.5379611421957025
- type: nauc_recall_at_3_diff1
value: 5.7018506525557875
- type: nauc_recall_at_3_max
value: -8.180345365085538
- type: nauc_recall_at_3_std
value: -18.370337505024796
- type: nauc_recall_at_5_diff1
value: 3.6053552181043913
- type: nauc_recall_at_5_max
value: -5.207647070615579
- type: nauc_recall_at_5_std
value: -19.894910854272492
- type: ndcg_at_1
value: 34.282000000000004
- type: ndcg_at_10
value: 59.53000000000001
- type: ndcg_at_100
value: 62.187000000000005
- type: ndcg_at_1000
value: 62.243
- type: ndcg_at_20
value: 61.451
- type: ndcg_at_3
value: 49.393
- type: ndcg_at_5
value: 54.771
- type: precision_at_1
value: 34.282000000000004
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.651
- type: recall_at_1
value: 34.282000000000004
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 95.377
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.257
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 53.885000000000005
- type: map_at_1
value: 35.429
- type: map_at_10
value: 47.469
- type: map_at_100
value: 48.997
- type: map_at_1000
value: 49.117
- type: map_at_20
value: 48.324
- type: map_at_3
value: 43.835
- type: map_at_5
value: 46.043
- type: mrr_at_1
value: 43.34763948497854
- type: mrr_at_10
value: 53.258623430297234
- type: mrr_at_100
value: 53.99123884299005
- type: mrr_at_1000
value: 54.02458101713216
- type: mrr_at_20
value: 53.695964669618945
- type: mrr_at_3
value: 50.81068192656173
- type: mrr_at_5
value: 52.45588936576058
- type: nauc_map_at_1000_diff1
value: 51.55382824218782
- type: nauc_map_at_1000_max
value: 31.855350695084606
- type: nauc_map_at_1000_std
value: -5.465862008150992
- type: nauc_map_at_100_diff1
value: 51.55889312452534
- type: nauc_map_at_100_max
value: 31.88429637207401
- type: nauc_map_at_100_std
value: -5.40805152544196
- type: nauc_map_at_10_diff1
value: 51.6592677505875
- type: nauc_map_at_10_max
value: 31.554425233617543
- type: nauc_map_at_10_std
value: -6.125756131339046
- type: nauc_map_at_1_diff1
value: 55.6889617582672
- type: nauc_map_at_1_max
value: 27.821166966868176
- type: nauc_map_at_1_std
value: -5.778838498211728
- type: nauc_map_at_20_diff1
value: 51.70520970992564
- type: nauc_map_at_20_max
value: 31.811676633900465
- type: nauc_map_at_20_std
value: -5.463596751904718
- type: nauc_map_at_3_diff1
value: 53.206169626589606
- type: nauc_map_at_3_max
value: 31.64373830824983
- type: nauc_map_at_3_std
value: -6.054761451312827
- type: nauc_map_at_5_diff1
value: 52.37308971673694
- type: nauc_map_at_5_max
value: 31.974302019633644
- type: nauc_map_at_5_std
value: -6.302653399940531
- type: nauc_mrr_at_1000_diff1
value: 49.345152231490616
- type: nauc_mrr_at_1000_max
value: 33.49789501712511
- type: nauc_mrr_at_1000_std
value: -6.054730861163538
- type: nauc_mrr_at_100_diff1
value: 49.3387577601307
- type: nauc_mrr_at_100_max
value: 33.48149992464187
- type: nauc_mrr_at_100_std
value: -6.061177137579308
- type: nauc_mrr_at_10_diff1
value: 49.08312288449718
- type: nauc_mrr_at_10_max
value: 33.470393322577465
- type: nauc_mrr_at_10_std
value: -6.180286430216975
- type: nauc_mrr_at_1_diff1
value: 52.43364978537192
- type: nauc_mrr_at_1_max
value: 31.521755633355713
- type: nauc_mrr_at_1_std
value: -7.002499524130836
- type: nauc_mrr_at_20_diff1
value: 49.311059224991766
- type: nauc_mrr_at_20_max
value: 33.538523037692144
- type: nauc_mrr_at_20_std
value: -6.034619474981136
- type: nauc_mrr_at_3_diff1
value: 49.90489868439366
- type: nauc_mrr_at_3_max
value: 34.400493912164606
- type: nauc_mrr_at_3_std
value: -6.028875320994629
- type: nauc_mrr_at_5_diff1
value: 49.033661898983475
- type: nauc_mrr_at_5_max
value: 33.732315350193936
- type: nauc_mrr_at_5_std
value: -6.272548556330368
- type: nauc_ndcg_at_1000_diff1
value: 49.81681892539247
- type: nauc_ndcg_at_1000_max
value: 33.06518006062093
- type: nauc_ndcg_at_1000_std
value: -4.282105713014755
- type: nauc_ndcg_at_100_diff1
value: 49.42362108857786
- type: nauc_ndcg_at_100_max
value: 32.92024325540483
- type: nauc_ndcg_at_100_std
value: -3.7786765305496717
- type: nauc_ndcg_at_10_diff1
value: 48.83102435475594
- type: nauc_ndcg_at_10_max
value: 31.898404563611958
- type: nauc_ndcg_at_10_std
value: -6.2024003866707
- type: nauc_ndcg_at_1_diff1
value: 52.43364978537192
- type: nauc_ndcg_at_1_max
value: 31.521755633355713
- type: nauc_ndcg_at_1_std
value: -7.002499524130836
- type: nauc_ndcg_at_20_diff1
value: 49.466526454438316
- type: nauc_ndcg_at_20_max
value: 32.424462698701674
- type: nauc_ndcg_at_20_std
value: -4.520809563712905
- type: nauc_ndcg_at_3_diff1
value: 50.997884562583884
- type: nauc_ndcg_at_3_max
value: 33.26787046916917
- type: nauc_ndcg_at_3_std
value: -6.340699471083753
- type: nauc_ndcg_at_5_diff1
value: 49.68314458398097
- type: nauc_ndcg_at_5_max
value: 32.80910071143984
- type: nauc_ndcg_at_5_std
value: -6.734495576445887
- type: nauc_precision_at_1000_diff1
value: -24.18940012795299
- type: nauc_precision_at_1000_max
value: -10.995343674356896
- type: nauc_precision_at_1000_std
value: -8.298841004724856
- type: nauc_precision_at_100_diff1
value: -18.104939577865935
- type: nauc_precision_at_100_max
value: -1.3757613100627637
- type: nauc_precision_at_100_std
value: 0.07661922190466432
- type: nauc_precision_at_10_diff1
value: 3.9624459059275967
- type: nauc_precision_at_10_max
value: 14.841561593450391
- type: nauc_precision_at_10_std
value: -2.485374333613117
- type: nauc_precision_at_1_diff1
value: 52.43364978537192
- type: nauc_precision_at_1_max
value: 31.521755633355713
- type: nauc_precision_at_1_std
value: -7.002499524130836
- type: nauc_precision_at_20_diff1
value: -4.4791763436505265
- type: nauc_precision_at_20_max
value: 9.157872836996276
- type: nauc_precision_at_20_std
value: 2.086903518342088
- type: nauc_precision_at_3_diff1
value: 28.480888018235568
- type: nauc_precision_at_3_max
value: 30.34526267718485
- type: nauc_precision_at_3_std
value: -6.3006706923866025
- type: nauc_precision_at_5_diff1
value: 16.488039195453517
- type: nauc_precision_at_5_max
value: 24.593477099241852
- type: nauc_precision_at_5_std
value: -5.316448107840636
- type: nauc_recall_at_1000_diff1
value: 34.715187316533076
- type: nauc_recall_at_1000_max
value: 58.2266544684947
- type: nauc_recall_at_1000_std
value: 63.85237636398278
- type: nauc_recall_at_100_diff1
value: 36.08623826028132
- type: nauc_recall_at_100_max
value: 33.05011429439473
- type: nauc_recall_at_100_std
value: 16.559545021212564
- type: nauc_recall_at_10_diff1
value: 39.76738610714205
- type: nauc_recall_at_10_max
value: 28.233045706945997
- type: nauc_recall_at_10_std
value: -5.13243784043598
- type: nauc_recall_at_1_diff1
value: 55.6889617582672
- type: nauc_recall_at_1_max
value: 27.821166966868176
- type: nauc_recall_at_1_std
value: -5.778838498211728
- type: nauc_recall_at_20_diff1
value: 41.18682480073759
- type: nauc_recall_at_20_max
value: 29.525993239296945
- type: nauc_recall_at_20_std
value: 1.5003598438954298
- type: nauc_recall_at_3_diff1
value: 48.31879460301157
- type: nauc_recall_at_3_max
value: 32.93751306970167
- type: nauc_recall_at_3_std
value: -5.28070084211707
- type: nauc_recall_at_5_diff1
value: 44.327686388315435
- type: nauc_recall_at_5_max
value: 32.04823486234599
- type: nauc_recall_at_5_std
value: -6.4221525602778256
- type: ndcg_at_1
value: 43.348
- type: ndcg_at_10
value: 53.885000000000005
- type: ndcg_at_100
value: 59.204
- type: ndcg_at_1000
value: 60.744
- type: ndcg_at_20
value: 55.995
- type: ndcg_at_3
value: 49.112
- type: ndcg_at_5
value: 51.61900000000001
- type: precision_at_1
value: 43.348
- type: precision_at_10
value: 10.242999999999999
- type: precision_at_100
value: 1.6150000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_20
value: 6.066
- type: precision_at_3
value: 23.605
- type: precision_at_5
value: 17.024
- type: recall_at_1
value: 35.429
- type: recall_at_10
value: 65.77199999999999
- type: recall_at_100
value: 87.89
- type: recall_at_1000
value: 97.13000000000001
- type: recall_at_20
value: 73.299
- type: recall_at_3
value: 52.034000000000006
- type: recall_at_5
value: 58.96
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 49.55
- type: map_at_1
value: 31.684
- type: map_at_10
value: 43.258
- type: map_at_100
value: 44.628
- type: map_at_1000
value: 44.761
- type: map_at_20
value: 44.015
- type: map_at_3
value: 39.778000000000006
- type: map_at_5
value: 41.643
- type: mrr_at_1
value: 39.87261146496815
- type: mrr_at_10
value: 49.31978566373469
- type: mrr_at_100
value: 49.94922739445482
- type: mrr_at_1000
value: 49.990325601254106
- type: mrr_at_20
value: 49.70597468576704
- type: mrr_at_3
value: 47.070063694267546
- type: mrr_at_5
value: 48.23248407643316
- type: nauc_map_at_1000_diff1
value: 53.44044712371752
- type: nauc_map_at_1000_max
value: 34.5651440062204
- type: nauc_map_at_1000_std
value: -0.9814384609230475
- type: nauc_map_at_100_diff1
value: 53.429004435388464
- type: nauc_map_at_100_max
value: 34.52038957273436
- type: nauc_map_at_100_std
value: -1.1021936362699805
- type: nauc_map_at_10_diff1
value: 53.879128574022005
- type: nauc_map_at_10_max
value: 33.74771524140917
- type: nauc_map_at_10_std
value: -2.945132777205236
- type: nauc_map_at_1_diff1
value: 60.25159799695403
- type: nauc_map_at_1_max
value: 26.843892985235808
- type: nauc_map_at_1_std
value: -9.618702739509093
- type: nauc_map_at_20_diff1
value: 53.56789898225283
- type: nauc_map_at_20_max
value: 34.11628845872402
- type: nauc_map_at_20_std
value: -2.024376635870884
- type: nauc_map_at_3_diff1
value: 54.45882099014072
- type: nauc_map_at_3_max
value: 31.29495446507793
- type: nauc_map_at_3_std
value: -6.391948228781555
- type: nauc_map_at_5_diff1
value: 54.20536489050697
- type: nauc_map_at_5_max
value: 32.31001487256826
- type: nauc_map_at_5_std
value: -5.050953263346934
- type: nauc_mrr_at_1000_diff1
value: 50.835858995999125
- type: nauc_mrr_at_1000_max
value: 38.20717381701079
- type: nauc_mrr_at_1000_std
value: 4.174163368228787
- type: nauc_mrr_at_100_diff1
value: 50.827072441041224
- type: nauc_mrr_at_100_max
value: 38.21077622034756
- type: nauc_mrr_at_100_std
value: 4.1951082737013365
- type: nauc_mrr_at_10_diff1
value: 50.90578491570948
- type: nauc_mrr_at_10_max
value: 38.19229691746408
- type: nauc_mrr_at_10_std
value: 3.8290750066335546
- type: nauc_mrr_at_1_diff1
value: 54.807021746871186
- type: nauc_mrr_at_1_max
value: 37.09225642043841
- type: nauc_mrr_at_1_std
value: 0.5654547513131355
- type: nauc_mrr_at_20_diff1
value: 50.86247832095378
- type: nauc_mrr_at_20_max
value: 38.19277867384178
- type: nauc_mrr_at_20_std
value: 4.098932316791841
- type: nauc_mrr_at_3_diff1
value: 50.788934370903036
- type: nauc_mrr_at_3_max
value: 37.72130561895659
- type: nauc_mrr_at_3_std
value: 2.7339370381517583
- type: nauc_mrr_at_5_diff1
value: 50.72543792525547
- type: nauc_mrr_at_5_max
value: 37.57740908475375
- type: nauc_mrr_at_5_std
value: 2.742881431085094
- type: nauc_ndcg_at_1000_diff1
value: 50.89692885407576
- type: nauc_ndcg_at_1000_max
value: 37.250583054716955
- type: nauc_ndcg_at_1000_std
value: 5.552279826578831
- type: nauc_ndcg_at_100_diff1
value: 50.624606875496944
- type: nauc_ndcg_at_100_max
value: 37.1024514234627
- type: nauc_ndcg_at_100_std
value: 5.495892760032762
- type: nauc_ndcg_at_10_diff1
value: 51.910387255793445
- type: nauc_ndcg_at_10_max
value: 36.71168418905039
- type: nauc_ndcg_at_10_std
value: 2.3064115117905217
- type: nauc_ndcg_at_1_diff1
value: 54.807021746871186
- type: nauc_ndcg_at_1_max
value: 37.09225642043841
- type: nauc_ndcg_at_1_std
value: 0.5654547513131355
- type: nauc_ndcg_at_20_diff1
value: 51.43416588546778
- type: nauc_ndcg_at_20_max
value: 36.76387180172346
- type: nauc_ndcg_at_20_std
value: 3.7012798827049718
- type: nauc_ndcg_at_3_diff1
value: 50.91198494475423
- type: nauc_ndcg_at_3_max
value: 34.92770670756687
- type: nauc_ndcg_at_3_std
value: -0.9071486759887368
- type: nauc_ndcg_at_5_diff1
value: 51.63559468683886
- type: nauc_ndcg_at_5_max
value: 34.86849679864564
- type: nauc_ndcg_at_5_std
value: -0.734837221224976
- type: nauc_precision_at_1000_diff1
value: -13.43645457127175
- type: nauc_precision_at_1000_max
value: 12.71162105198664
- type: nauc_precision_at_1000_std
value: 33.175399007040255
- type: nauc_precision_at_100_diff1
value: -8.549834785105412
- type: nauc_precision_at_100_max
value: 22.47383497331883
- type: nauc_precision_at_100_std
value: 39.09108761430844
- type: nauc_precision_at_10_diff1
value: 7.556572451100043
- type: nauc_precision_at_10_max
value: 35.35285122987575
- type: nauc_precision_at_10_std
value: 29.417466305615967
- type: nauc_precision_at_1_diff1
value: 54.807021746871186
- type: nauc_precision_at_1_max
value: 37.09225642043841
- type: nauc_precision_at_1_std
value: 0.5654547513131355
- type: nauc_precision_at_20_diff1
value: -0.550158641635712
- type: nauc_precision_at_20_max
value: 29.9068430006187
- type: nauc_precision_at_20_std
value: 33.920603132821185
- type: nauc_precision_at_3_diff1
value: 25.551264664276687
- type: nauc_precision_at_3_max
value: 37.59463225854679
- type: nauc_precision_at_3_std
value: 13.707295021359043
- type: nauc_precision_at_5_diff1
value: 17.76136129817151
- type: nauc_precision_at_5_max
value: 35.85363807255972
- type: nauc_precision_at_5_std
value: 19.48470876841111
- type: nauc_recall_at_1000_diff1
value: 37.1593620123866
- type: nauc_recall_at_1000_max
value: 46.29322536951135
- type: nauc_recall_at_1000_std
value: 51.47312657083967
- type: nauc_recall_at_100_diff1
value: 37.7542224949536
- type: nauc_recall_at_100_max
value: 38.84120637703135
- type: nauc_recall_at_100_std
value: 28.839672572221925
- type: nauc_recall_at_10_diff1
value: 46.24130302658384
- type: nauc_recall_at_10_max
value: 35.89001724712849
- type: nauc_recall_at_10_std
value: 6.985137790828618
- type: nauc_recall_at_1_diff1
value: 60.25159799695403
- type: nauc_recall_at_1_max
value: 26.843892985235808
- type: nauc_recall_at_1_std
value: -9.618702739509093
- type: nauc_recall_at_20_diff1
value: 43.63576680886187
- type: nauc_recall_at_20_max
value: 36.79079644708101
- type: nauc_recall_at_20_std
value: 13.81561928605839
- type: nauc_recall_at_3_diff1
value: 48.2299322140522
- type: nauc_recall_at_3_max
value: 30.038088484376203
- type: nauc_recall_at_3_std
value: -4.871116183843762
- type: nauc_recall_at_5_diff1
value: 47.22331872695983
- type: nauc_recall_at_5_max
value: 30.398541477173136
- type: nauc_recall_at_5_std
value: -3.2038541888528957
- type: ndcg_at_1
value: 39.873
- type: ndcg_at_10
value: 49.55
- type: ndcg_at_100
value: 53.809
- type: ndcg_at_1000
value: 55.767999999999994
- type: ndcg_at_20
value: 51.275999999999996
- type: ndcg_at_3
value: 44.91
- type: ndcg_at_5
value: 46.855999999999995
- type: precision_at_1
value: 39.873
- type: precision_at_10
value: 9.65
- type: precision_at_100
value: 1.522
- type: precision_at_1000
value: 0.196
- type: precision_at_20
value: 5.701
- type: precision_at_3
value: 22.166
- type: precision_at_5
value: 15.643
- type: recall_at_1
value: 31.684
- type: recall_at_10
value: 60.69
- type: recall_at_100
value: 78.521
- type: recall_at_1000
value: 91.02900000000001
- type: recall_at_20
value: 66.973
- type: recall_at_3
value: 46.807
- type: recall_at_5
value: 52.402
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 62.686
- type: map_at_1
value: 43.856
- type: map_at_10
value: 57.056
- type: map_at_100
value: 58.048
- type: map_at_1000
value: 58.092
- type: map_at_20
value: 57.684000000000005
- type: map_at_3
value: 53.958
- type: map_at_5
value: 55.80500000000001
- type: mrr_at_1
value: 50.03134796238244
- type: mrr_at_10
value: 60.31022043091019
- type: mrr_at_100
value: 60.91892338857461
- type: mrr_at_1000
value: 60.93770463536649
- type: mrr_at_20
value: 60.705642387392736
- type: mrr_at_3
value: 58.286311389759746
- type: mrr_at_5
value: 59.49320794148393
- type: nauc_map_at_1000_diff1
value: 54.849140197256695
- type: nauc_map_at_1000_max
value: 38.978448968260224
- type: nauc_map_at_1000_std
value: 0.4955439383268162
- type: nauc_map_at_100_diff1
value: 54.824334747823364
- type: nauc_map_at_100_max
value: 38.959443109450994
- type: nauc_map_at_100_std
value: 0.49626092018886037
- type: nauc_map_at_10_diff1
value: 54.778189277103394
- type: nauc_map_at_10_max
value: 38.20972191654546
- type: nauc_map_at_10_std
value: -0.7239823837455759
- type: nauc_map_at_1_diff1
value: 58.74017164752485
- type: nauc_map_at_1_max
value: 31.528974862589585
- type: nauc_map_at_1_std
value: -3.273824691929492
- type: nauc_map_at_20_diff1
value: 54.78943693416187
- type: nauc_map_at_20_max
value: 38.77930316443076
- type: nauc_map_at_20_std
value: 0.25607460088355544
- type: nauc_map_at_3_diff1
value: 55.68313410225767
- type: nauc_map_at_3_max
value: 36.22847284104399
- type: nauc_map_at_3_std
value: -3.010979639100503
- type: nauc_map_at_5_diff1
value: 55.11385094420661
- type: nauc_map_at_5_max
value: 37.319681045490924
- type: nauc_map_at_5_std
value: -2.156640733221061
- type: nauc_mrr_at_1000_diff1
value: 54.504759468380705
- type: nauc_mrr_at_1000_max
value: 40.58849492650406
- type: nauc_mrr_at_1000_std
value: 1.8226622175866118
- type: nauc_mrr_at_100_diff1
value: 54.4918034449886
- type: nauc_mrr_at_100_max
value: 40.59202728933427
- type: nauc_mrr_at_100_std
value: 1.8276428096536335
- type: nauc_mrr_at_10_diff1
value: 54.33603399493329
- type: nauc_mrr_at_10_max
value: 40.58896878978089
- type: nauc_mrr_at_10_std
value: 1.5733340909114375
- type: nauc_mrr_at_1_diff1
value: 58.062410036466105
- type: nauc_mrr_at_1_max
value: 37.660958859966506
- type: nauc_mrr_at_1_std
value: 0.029007600674170648
- type: nauc_mrr_at_20_diff1
value: 54.43793386924358
- type: nauc_mrr_at_20_max
value: 40.66773423875307
- type: nauc_mrr_at_20_std
value: 1.891967891797154
- type: nauc_mrr_at_3_diff1
value: 54.77901284537966
- type: nauc_mrr_at_3_max
value: 40.182219821206964
- type: nauc_mrr_at_3_std
value: 0.8911935034597871
- type: nauc_mrr_at_5_diff1
value: 54.466068837163675
- type: nauc_mrr_at_5_max
value: 40.334996916684126
- type: nauc_mrr_at_5_std
value: 0.9460830492892364
- type: nauc_ndcg_at_1000_diff1
value: 53.8465376860938
- type: nauc_ndcg_at_1000_max
value: 41.63158111016696
- type: nauc_ndcg_at_1000_std
value: 3.864205884257578
- type: nauc_ndcg_at_100_diff1
value: 53.4025864436944
- type: nauc_ndcg_at_100_max
value: 41.805453995307914
- type: nauc_ndcg_at_100_std
value: 4.36777557904857
- type: nauc_ndcg_at_10_diff1
value: 52.96034987157544
- type: nauc_ndcg_at_10_max
value: 40.7601173480795
- type: nauc_ndcg_at_10_std
value: 1.905824035879141
- type: nauc_ndcg_at_1_diff1
value: 58.062410036466105
- type: nauc_ndcg_at_1_max
value: 37.660958859966506
- type: nauc_ndcg_at_1_std
value: 0.029007600674170648
- type: nauc_ndcg_at_20_diff1
value: 53.2834771889242
- type: nauc_ndcg_at_20_max
value: 41.713541932946406
- type: nauc_ndcg_at_20_std
value: 3.865102828793311
- type: nauc_ndcg_at_3_diff1
value: 54.03389464372289
- type: nauc_ndcg_at_3_max
value: 38.41449914649933
- type: nauc_ndcg_at_3_std
value: -0.886276189886313
- type: nauc_ndcg_at_5_diff1
value: 53.456413320299
- type: nauc_ndcg_at_5_max
value: 39.49048882649335
- type: nauc_ndcg_at_5_std
value: -0.42692690160443814
- type: nauc_precision_at_1000_diff1
value: -14.770791653274824
- type: nauc_precision_at_1000_max
value: 21.479874538905246
- type: nauc_precision_at_1000_std
value: 28.607024261300207
- type: nauc_precision_at_100_diff1
value: -12.189696449878126
- type: nauc_precision_at_100_max
value: 26.69785787492456
- type: nauc_precision_at_100_std
value: 33.59098307467553
- type: nauc_precision_at_10_diff1
value: 6.922968330978399
- type: nauc_precision_at_10_max
value: 34.52138344123087
- type: nauc_precision_at_10_std
value: 21.768427637079952
- type: nauc_precision_at_1_diff1
value: 58.062410036466105
- type: nauc_precision_at_1_max
value: 37.660958859966506
- type: nauc_precision_at_1_std
value: 0.029007600674170648
- type: nauc_precision_at_20_diff1
value: -0.6837867902179278
- type: nauc_precision_at_20_max
value: 33.98683709011133
- type: nauc_precision_at_20_std
value: 30.8845561918902
- type: nauc_precision_at_3_diff1
value: 28.195043041120847
- type: nauc_precision_at_3_max
value: 37.659916094938836
- type: nauc_precision_at_3_std
value: 7.226520146634867
- type: nauc_precision_at_5_diff1
value: 16.633667288096245
- type: nauc_precision_at_5_max
value: 34.90176597404891
- type: nauc_precision_at_5_std
value: 12.421585442334088
- type: nauc_recall_at_1000_diff1
value: 45.20743732415397
- type: nauc_recall_at_1000_max
value: 72.77115913579242
- type: nauc_recall_at_1000_std
value: 70.48328496679083
- type: nauc_recall_at_100_diff1
value: 38.56282680810794
- type: nauc_recall_at_100_max
value: 55.46797683321103
- type: nauc_recall_at_100_std
value: 36.878791151929136
- type: nauc_recall_at_10_diff1
value: 44.18252051452362
- type: nauc_recall_at_10_max
value: 43.33391810040086
- type: nauc_recall_at_10_std
value: 6.663378192277723
- type: nauc_recall_at_1_diff1
value: 58.74017164752485
- type: nauc_recall_at_1_max
value: 31.528974862589585
- type: nauc_recall_at_1_std
value: -3.273824691929492
- type: nauc_recall_at_20_diff1
value: 44.19944231642417
- type: nauc_recall_at_20_max
value: 49.401101483915866
- type: nauc_recall_at_20_std
value: 18.97803841673839
- type: nauc_recall_at_3_diff1
value: 49.56378985428704
- type: nauc_recall_at_3_max
value: 36.434210616870224
- type: nauc_recall_at_3_std
value: -2.850559971607616
- type: nauc_recall_at_5_diff1
value: 47.37107217086109
- type: nauc_recall_at_5_max
value: 39.0236745509895
- type: nauc_recall_at_5_std
value: -1.7402454457937195
- type: ndcg_at_1
value: 50.031000000000006
- type: ndcg_at_10
value: 62.686
- type: ndcg_at_100
value: 66.403
- type: ndcg_at_1000
value: 67.241
- type: ndcg_at_20
value: 64.37899999999999
- type: ndcg_at_3
value: 57.859
- type: ndcg_at_5
value: 60.375
- type: precision_at_1
value: 50.031000000000006
- type: precision_at_10
value: 9.856
- type: precision_at_100
value: 1.266
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.489
- type: precision_at_3
value: 25.746999999999996
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 43.856
- type: recall_at_10
value: 75.824
- type: recall_at_100
value: 91.622
- type: recall_at_1000
value: 97.538
- type: recall_at_20
value: 81.951
- type: recall_at_3
value: 63.016000000000005
- type: recall_at_5
value: 69.18299999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 43.983
- type: map_at_1
value: 28.942
- type: map_at_10
value: 38.621
- type: map_at_100
value: 39.7
- type: map_at_1000
value: 39.766
- type: map_at_20
value: 39.262
- type: map_at_3
value: 35.719
- type: map_at_5
value: 37.378
- type: mrr_at_1
value: 31.29943502824859
- type: mrr_at_10
value: 40.76463994260603
- type: mrr_at_100
value: 41.67073617629083
- type: mrr_at_1000
value: 41.717446259457105
- type: mrr_at_20
value: 41.32577374689195
- type: mrr_at_3
value: 37.984934086628996
- type: mrr_at_5
value: 39.64595103578152
- type: nauc_map_at_1000_diff1
value: 43.64461679688985
- type: nauc_map_at_1000_max
value: 31.53717883948204
- type: nauc_map_at_1000_std
value: 1.193745788248017
- type: nauc_map_at_100_diff1
value: 43.63847825079489
- type: nauc_map_at_100_max
value: 31.536602619279165
- type: nauc_map_at_100_std
value: 1.2001240243342401
- type: nauc_map_at_10_diff1
value: 43.845991987142014
- type: nauc_map_at_10_max
value: 31.27509937344113
- type: nauc_map_at_10_std
value: 0.7327934840520994
- type: nauc_map_at_1_diff1
value: 50.62269273984579
- type: nauc_map_at_1_max
value: 30.16325757909521
- type: nauc_map_at_1_std
value: -0.6398875136233392
- type: nauc_map_at_20_diff1
value: 43.630758403790914
- type: nauc_map_at_20_max
value: 31.408258098047703
- type: nauc_map_at_20_std
value: 1.12616034652217
- type: nauc_map_at_3_diff1
value: 44.823493567359456
- type: nauc_map_at_3_max
value: 31.075886347614496
- type: nauc_map_at_3_std
value: -0.25126874515735426
- type: nauc_map_at_5_diff1
value: 43.79768853087658
- type: nauc_map_at_5_max
value: 31.091080995725324
- type: nauc_map_at_5_std
value: 0.16440771782544047
- type: nauc_mrr_at_1000_diff1
value: 42.7865400752329
- type: nauc_mrr_at_1000_max
value: 32.84731670326893
- type: nauc_mrr_at_1000_std
value: 2.6067637582013825
- type: nauc_mrr_at_100_diff1
value: 42.771741548331065
- type: nauc_mrr_at_100_max
value: 32.85324232845987
- type: nauc_mrr_at_100_std
value: 2.6092786694308376
- type: nauc_mrr_at_10_diff1
value: 42.82969738870672
- type: nauc_mrr_at_10_max
value: 32.69407549631432
- type: nauc_mrr_at_10_std
value: 2.302903910016054
- type: nauc_mrr_at_1_diff1
value: 49.05638333657571
- type: nauc_mrr_at_1_max
value: 33.12030717171514
- type: nauc_mrr_at_1_std
value: 1.3278035087690774
- type: nauc_mrr_at_20_diff1
value: 42.74267239536286
- type: nauc_mrr_at_20_max
value: 32.78571108973092
- type: nauc_mrr_at_20_std
value: 2.5932669908758643
- type: nauc_mrr_at_3_diff1
value: 43.69963426089187
- type: nauc_mrr_at_3_max
value: 32.78193126956233
- type: nauc_mrr_at_3_std
value: 1.634874463134699
- type: nauc_mrr_at_5_diff1
value: 42.838630647832524
- type: nauc_mrr_at_5_max
value: 32.459318735260545
- type: nauc_mrr_at_5_std
value: 1.9412518283209172
- type: nauc_ndcg_at_1000_diff1
value: 41.01253839851583
- type: nauc_ndcg_at_1000_max
value: 32.69570568894237
- type: nauc_ndcg_at_1000_std
value: 3.4254737113410343
- type: nauc_ndcg_at_100_diff1
value: 40.62589243745832
- type: nauc_ndcg_at_100_max
value: 32.664990655736126
- type: nauc_ndcg_at_100_std
value: 3.799569445326048
- type: nauc_ndcg_at_10_diff1
value: 41.31658753735306
- type: nauc_ndcg_at_10_max
value: 31.511946320339295
- type: nauc_ndcg_at_10_std
value: 2.0492930500796662
- type: nauc_ndcg_at_1_diff1
value: 49.05638333657571
- type: nauc_ndcg_at_1_max
value: 33.12030717171514
- type: nauc_ndcg_at_1_std
value: 1.3278035087690774
- type: nauc_ndcg_at_20_diff1
value: 40.66188223212841
- type: nauc_ndcg_at_20_max
value: 31.926240431497476
- type: nauc_ndcg_at_20_std
value: 3.370398664595343
- type: nauc_ndcg_at_3_diff1
value: 43.035580180241
- type: nauc_ndcg_at_3_max
value: 31.363874129878404
- type: nauc_ndcg_at_3_std
value: 0.1422507242819929
- type: nauc_ndcg_at_5_diff1
value: 41.29049003955878
- type: nauc_ndcg_at_5_max
value: 31.112034994977737
- type: nauc_ndcg_at_5_std
value: 0.860179279828966
- type: nauc_precision_at_1000_diff1
value: -12.41854465881981
- type: nauc_precision_at_1000_max
value: 14.706779246590548
- type: nauc_precision_at_1000_std
value: 9.812804367375206
- type: nauc_precision_at_100_diff1
value: 2.797520107808461
- type: nauc_precision_at_100_max
value: 24.335873541811406
- type: nauc_precision_at_100_std
value: 12.87186398750545
- type: nauc_precision_at_10_diff1
value: 24.530962799265847
- type: nauc_precision_at_10_max
value: 31.00772010798733
- type: nauc_precision_at_10_std
value: 6.696733001548185
- type: nauc_precision_at_1_diff1
value: 49.05638333657571
- type: nauc_precision_at_1_max
value: 33.12030717171514
- type: nauc_precision_at_1_std
value: 1.3278035087690774
- type: nauc_precision_at_20_diff1
value: 16.25028416351204
- type: nauc_precision_at_20_max
value: 29.629326492027342
- type: nauc_precision_at_20_std
value: 11.085888573121679
- type: nauc_precision_at_3_diff1
value: 33.923667689694256
- type: nauc_precision_at_3_max
value: 33.5859782361996
- type: nauc_precision_at_3_std
value: 1.9468331086918693
- type: nauc_precision_at_5_diff1
value: 27.917827233088875
- type: nauc_precision_at_5_max
value: 33.13290043423535
- type: nauc_precision_at_5_std
value: 3.800870695945311
- type: nauc_recall_at_1000_diff1
value: 9.680283388428789
- type: nauc_recall_at_1000_max
value: 49.479399284871235
- type: nauc_recall_at_1000_std
value: 31.506985071436088
- type: nauc_recall_at_100_diff1
value: 23.607673377885448
- type: nauc_recall_at_100_max
value: 36.637750366403935
- type: nauc_recall_at_100_std
value: 18.30770690564224
- type: nauc_recall_at_10_diff1
value: 33.199683418312446
- type: nauc_recall_at_10_max
value: 29.63115497012312
- type: nauc_recall_at_10_std
value: 4.813200391480566
- type: nauc_recall_at_1_diff1
value: 50.62269273984579
- type: nauc_recall_at_1_max
value: 30.16325757909521
- type: nauc_recall_at_1_std
value: -0.6398875136233392
- type: nauc_recall_at_20_diff1
value: 29.16488387844995
- type: nauc_recall_at_20_max
value: 30.788019479459
- type: nauc_recall_at_20_std
value: 11.031953917298853
- type: nauc_recall_at_3_diff1
value: 38.215351600417065
- type: nauc_recall_at_3_max
value: 29.619887154236128
- type: nauc_recall_at_3_std
value: -0.13237298980339363
- type: nauc_recall_at_5_diff1
value: 33.93788042633265
- type: nauc_recall_at_5_max
value: 28.67185092656741
- type: nauc_recall_at_5_std
value: 1.316700201091445
- type: ndcg_at_1
value: 31.299
- type: ndcg_at_10
value: 43.983
- type: ndcg_at_100
value: 48.992999999999995
- type: ndcg_at_1000
value: 50.757
- type: ndcg_at_20
value: 46.152
- type: ndcg_at_3
value: 38.367000000000004
- type: ndcg_at_5
value: 41.171
- type: precision_at_1
value: 31.299
- type: precision_at_10
value: 6.734
- type: precision_at_100
value: 0.972
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.898
- type: precision_at_3
value: 16.121
- type: precision_at_5
value: 11.344999999999999
- type: recall_at_1
value: 28.942
- type: recall_at_10
value: 58.343999999999994
- type: recall_at_100
value: 80.82300000000001
- type: recall_at_1000
value: 94.348
- type: recall_at_20
value: 66.449
- type: recall_at_3
value: 43.415
- type: recall_at_5
value: 50.007999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 33.144
- type: map_at_1
value: 19.41
- type: map_at_10
value: 27.802
- type: map_at_100
value: 29.157
- type: map_at_1000
value: 29.274
- type: map_at_20
value: 28.549000000000003
- type: map_at_3
value: 25.052999999999997
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.756218905472636
- type: mrr_at_10
value: 32.3623450209271
- type: mrr_at_100
value: 33.3648208444617
- type: mrr_at_1000
value: 33.427688215162185
- type: mrr_at_20
value: 32.93723485575758
- type: mrr_at_3
value: 29.539800995024883
- type: mrr_at_5
value: 31.156716417910452
- type: nauc_map_at_1000_diff1
value: 36.196391248081284
- type: nauc_map_at_1000_max
value: 25.650644367091495
- type: nauc_map_at_1000_std
value: 6.130340697729844
- type: nauc_map_at_100_diff1
value: 36.138890642411376
- type: nauc_map_at_100_max
value: 25.587124763888518
- type: nauc_map_at_100_std
value: 6.129336379055536
- type: nauc_map_at_10_diff1
value: 36.254426743566775
- type: nauc_map_at_10_max
value: 25.465599906543034
- type: nauc_map_at_10_std
value: 5.880280378112879
- type: nauc_map_at_1_diff1
value: 42.890551563179976
- type: nauc_map_at_1_max
value: 25.813805281076956
- type: nauc_map_at_1_std
value: 5.150718386163028
- type: nauc_map_at_20_diff1
value: 35.98551587974314
- type: nauc_map_at_20_max
value: 25.501540521726636
- type: nauc_map_at_20_std
value: 5.858703157458749
- type: nauc_map_at_3_diff1
value: 37.646558039577734
- type: nauc_map_at_3_max
value: 26.138491471124247
- type: nauc_map_at_3_std
value: 6.0487505175540734
- type: nauc_map_at_5_diff1
value: 36.817582976153695
- type: nauc_map_at_5_max
value: 25.398200211121146
- type: nauc_map_at_5_std
value: 6.31126763919522
- type: nauc_mrr_at_1000_diff1
value: 37.313544952847835
- type: nauc_mrr_at_1000_max
value: 26.96218532078988
- type: nauc_mrr_at_1000_std
value: 6.814359224654042
- type: nauc_mrr_at_100_diff1
value: 37.28104407653679
- type: nauc_mrr_at_100_max
value: 26.931243040477256
- type: nauc_mrr_at_100_std
value: 6.800500150841733
- type: nauc_mrr_at_10_diff1
value: 37.315832621275895
- type: nauc_mrr_at_10_max
value: 26.941454225978372
- type: nauc_mrr_at_10_std
value: 6.837046527796884
- type: nauc_mrr_at_1_diff1
value: 43.19904188582958
- type: nauc_mrr_at_1_max
value: 26.975620445904795
- type: nauc_mrr_at_1_std
value: 4.52071008581395
- type: nauc_mrr_at_20_diff1
value: 37.2200524790774
- type: nauc_mrr_at_20_max
value: 26.971494160765847
- type: nauc_mrr_at_20_std
value: 6.716431228783282
- type: nauc_mrr_at_3_diff1
value: 38.46236387340654
- type: nauc_mrr_at_3_max
value: 27.846812992192056
- type: nauc_mrr_at_3_std
value: 6.550711872569794
- type: nauc_mrr_at_5_diff1
value: 37.620346007658476
- type: nauc_mrr_at_5_max
value: 27.031025952102038
- type: nauc_mrr_at_5_std
value: 7.32343760231163
- type: nauc_ndcg_at_1000_diff1
value: 34.95081314840592
- type: nauc_ndcg_at_1000_max
value: 26.89265465124325
- type: nauc_ndcg_at_1000_std
value: 7.854154466831975
- type: nauc_ndcg_at_100_diff1
value: 34.01417812563093
- type: nauc_ndcg_at_100_max
value: 25.792737746436835
- type: nauc_ndcg_at_100_std
value: 7.726584165493833
- type: nauc_ndcg_at_10_diff1
value: 33.895122516474466
- type: nauc_ndcg_at_10_max
value: 25.388442204589612
- type: nauc_ndcg_at_10_std
value: 6.359560223645991
- type: nauc_ndcg_at_1_diff1
value: 43.19904188582958
- type: nauc_ndcg_at_1_max
value: 26.975620445904795
- type: nauc_ndcg_at_1_std
value: 4.52071008581395
- type: nauc_ndcg_at_20_diff1
value: 33.36078689830245
- type: nauc_ndcg_at_20_max
value: 25.531794610571563
- type: nauc_ndcg_at_20_std
value: 6.136658608653248
- type: nauc_ndcg_at_3_diff1
value: 36.44505602530781
- type: nauc_ndcg_at_3_max
value: 26.9104071983157
- type: nauc_ndcg_at_3_std
value: 6.427178520371878
- type: nauc_ndcg_at_5_diff1
value: 35.01384323197442
- type: nauc_ndcg_at_5_max
value: 25.5560447088692
- type: nauc_ndcg_at_5_std
value: 7.3676236760360485
- type: nauc_precision_at_1000_diff1
value: 2.8903331041804514
- type: nauc_precision_at_1000_max
value: 4.059662742366004
- type: nauc_precision_at_1000_std
value: -1.5891687644008334
- type: nauc_precision_at_100_diff1
value: 8.437726471693766
- type: nauc_precision_at_100_max
value: 11.250588557568427
- type: nauc_precision_at_100_std
value: 4.231571164627862
- type: nauc_precision_at_10_diff1
value: 19.57085237210294
- type: nauc_precision_at_10_max
value: 20.973093492003905
- type: nauc_precision_at_10_std
value: 3.197416248152466
- type: nauc_precision_at_1_diff1
value: 43.19904188582958
- type: nauc_precision_at_1_max
value: 26.975620445904795
- type: nauc_precision_at_1_std
value: 4.52071008581395
- type: nauc_precision_at_20_diff1
value: 15.67136554192724
- type: nauc_precision_at_20_max
value: 17.706882621057858
- type: nauc_precision_at_20_std
value: 1.9363472182867714
- type: nauc_precision_at_3_diff1
value: 30.38035695042325
- type: nauc_precision_at_3_max
value: 26.48218693244094
- type: nauc_precision_at_3_std
value: 6.424657705785632
- type: nauc_precision_at_5_diff1
value: 25.272543315171458
- type: nauc_precision_at_5_max
value: 22.32441421311652
- type: nauc_precision_at_5_std
value: 7.4912569081905716
- type: nauc_recall_at_1000_diff1
value: 25.5748044137675
- type: nauc_recall_at_1000_max
value: 43.85796585370269
- type: nauc_recall_at_1000_std
value: 30.0338086596789
- type: nauc_recall_at_100_diff1
value: 22.577080638885093
- type: nauc_recall_at_100_max
value: 23.224511700617477
- type: nauc_recall_at_100_std
value: 15.187963852289313
- type: nauc_recall_at_10_diff1
value: 25.058592299355908
- type: nauc_recall_at_10_max
value: 22.24448483279841
- type: nauc_recall_at_10_std
value: 6.3179089740052765
- type: nauc_recall_at_1_diff1
value: 42.890551563179976
- type: nauc_recall_at_1_max
value: 25.813805281076956
- type: nauc_recall_at_1_std
value: 5.150718386163028
- type: nauc_recall_at_20_diff1
value: 22.433865123187307
- type: nauc_recall_at_20_max
value: 22.739695641511762
- type: nauc_recall_at_20_std
value: 5.362005125538497
- type: nauc_recall_at_3_diff1
value: 32.17919168998616
- type: nauc_recall_at_3_max
value: 26.044028436867357
- type: nauc_recall_at_3_std
value: 7.420349884006329
- type: nauc_recall_at_5_diff1
value: 28.967104573649138
- type: nauc_recall_at_5_max
value: 23.40865848168201
- type: nauc_recall_at_5_std
value: 9.174406147723621
- type: ndcg_at_1
value: 23.756
- type: ndcg_at_10
value: 33.144
- type: ndcg_at_100
value: 39.261
- type: ndcg_at_1000
value: 41.881
- type: ndcg_at_20
value: 35.56
- type: ndcg_at_3
value: 27.927999999999997
- type: ndcg_at_5
value: 30.293999999999997
- type: precision_at_1
value: 23.756
- type: precision_at_10
value: 5.995
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.688
- type: precision_at_3
value: 13.059999999999999
- type: precision_at_5
value: 9.602
- type: recall_at_1
value: 19.41
- type: recall_at_10
value: 45.074
- type: recall_at_100
value: 71.131
- type: recall_at_1000
value: 89.604
- type: recall_at_20
value: 53.673
- type: recall_at_3
value: 31.055
- type: recall_at_5
value: 36.714999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 49.675000000000004
- type: map_at_1
value: 33.178999999999995
- type: map_at_10
value: 43.807
- type: map_at_100
value: 45.17
- type: map_at_1000
value: 45.271
- type: map_at_20
value: 44.516
- type: map_at_3
value: 40.813
- type: map_at_5
value: 42.457
- type: mrr_at_1
value: 40.32723772858518
- type: mrr_at_10
value: 49.646867409138814
- type: mrr_at_100
value: 50.493686101426285
- type: mrr_at_1000
value: 50.525386961808834
- type: mrr_at_20
value: 50.120274354884586
- type: mrr_at_3
value: 47.49759384023096
- type: mrr_at_5
value: 48.72473532242535
- type: nauc_map_at_1000_diff1
value: 49.5947127786396
- type: nauc_map_at_1000_max
value: 33.39720045844929
- type: nauc_map_at_1000_std
value: -3.131428593252271
- type: nauc_map_at_100_diff1
value: 49.57797867324617
- type: nauc_map_at_100_max
value: 33.356927974709464
- type: nauc_map_at_100_std
value: -3.1661365376766337
- type: nauc_map_at_10_diff1
value: 49.59294630598952
- type: nauc_map_at_10_max
value: 32.86647346990462
- type: nauc_map_at_10_std
value: -4.1582043443386745
- type: nauc_map_at_1_diff1
value: 53.98646767288695
- type: nauc_map_at_1_max
value: 29.45629077638936
- type: nauc_map_at_1_std
value: -5.621187380771589
- type: nauc_map_at_20_diff1
value: 49.486982890447074
- type: nauc_map_at_20_max
value: 33.11681933406332
- type: nauc_map_at_20_std
value: -3.5826433195146854
- type: nauc_map_at_3_diff1
value: 50.81807107491861
- type: nauc_map_at_3_max
value: 32.32552291988859
- type: nauc_map_at_3_std
value: -3.952946504088928
- type: nauc_map_at_5_diff1
value: 49.70201354274439
- type: nauc_map_at_5_max
value: 32.831846031004886
- type: nauc_map_at_5_std
value: -3.8330488624207737
- type: nauc_mrr_at_1000_diff1
value: 49.04159472507738
- type: nauc_mrr_at_1000_max
value: 35.617600171138676
- type: nauc_mrr_at_1000_std
value: -1.5975830757486646
- type: nauc_mrr_at_100_diff1
value: 49.03848471692094
- type: nauc_mrr_at_100_max
value: 35.61936748662614
- type: nauc_mrr_at_100_std
value: -1.5922053398594729
- type: nauc_mrr_at_10_diff1
value: 48.92463964652612
- type: nauc_mrr_at_10_max
value: 35.37757708992045
- type: nauc_mrr_at_10_std
value: -2.2052028139567303
- type: nauc_mrr_at_1_diff1
value: 52.23915787290734
- type: nauc_mrr_at_1_max
value: 34.393531787632334
- type: nauc_mrr_at_1_std
value: -1.452007661016969
- type: nauc_mrr_at_20_diff1
value: 48.91168438018404
- type: nauc_mrr_at_20_max
value: 35.478962544421876
- type: nauc_mrr_at_20_std
value: -1.8246048423555414
- type: nauc_mrr_at_3_diff1
value: 50.115432665442164
- type: nauc_mrr_at_3_max
value: 35.89093796085569
- type: nauc_mrr_at_3_std
value: -1.4895016313153366
- type: nauc_mrr_at_5_diff1
value: 49.04321261351915
- type: nauc_mrr_at_5_max
value: 35.85730520949451
- type: nauc_mrr_at_5_std
value: -1.68790556880753
- type: nauc_ndcg_at_1000_diff1
value: 48.294697499154374
- type: nauc_ndcg_at_1000_max
value: 35.167410242367595
- type: nauc_ndcg_at_1000_std
value: -0.6346078535914157
- type: nauc_ndcg_at_100_diff1
value: 48.025525283449014
- type: nauc_ndcg_at_100_max
value: 34.79288511776105
- type: nauc_ndcg_at_100_std
value: -0.7823403044086993
- type: nauc_ndcg_at_10_diff1
value: 47.70793258015258
- type: nauc_ndcg_at_10_max
value: 33.09558927880104
- type: nauc_ndcg_at_10_std
value: -4.7793864166260605
- type: nauc_ndcg_at_1_diff1
value: 52.23915787290734
- type: nauc_ndcg_at_1_max
value: 34.393531787632334
- type: nauc_ndcg_at_1_std
value: -1.452007661016969
- type: nauc_ndcg_at_20_diff1
value: 47.354286045074815
- type: nauc_ndcg_at_20_max
value: 33.686648806027975
- type: nauc_ndcg_at_20_std
value: -3.0189085132476556
- type: nauc_ndcg_at_3_diff1
value: 49.68805334316908
- type: nauc_ndcg_at_3_max
value: 34.196077748056496
- type: nauc_ndcg_at_3_std
value: -2.7167289163768436
- type: nauc_ndcg_at_5_diff1
value: 47.94474868912989
- type: nauc_ndcg_at_5_max
value: 34.00261603413051
- type: nauc_ndcg_at_5_std
value: -3.3541028103046115
- type: nauc_precision_at_1000_diff1
value: -12.0150100710755
- type: nauc_precision_at_1000_max
value: 5.332942816568796
- type: nauc_precision_at_1000_std
value: 14.543288479130458
- type: nauc_precision_at_100_diff1
value: -4.920332181588838
- type: nauc_precision_at_100_max
value: 14.42313332017491
- type: nauc_precision_at_100_std
value: 17.821953321018384
- type: nauc_precision_at_10_diff1
value: 14.70509089079217
- type: nauc_precision_at_10_max
value: 25.381887131649716
- type: nauc_precision_at_10_std
value: 5.226419288645675
- type: nauc_precision_at_1_diff1
value: 52.23915787290734
- type: nauc_precision_at_1_max
value: 34.393531787632334
- type: nauc_precision_at_1_std
value: -1.452007661016969
- type: nauc_precision_at_20_diff1
value: 6.312827641507564
- type: nauc_precision_at_20_max
value: 22.483038562271933
- type: nauc_precision_at_20_std
value: 11.368419856892416
- type: nauc_precision_at_3_diff1
value: 33.271443420273606
- type: nauc_precision_at_3_max
value: 33.571078182106675
- type: nauc_precision_at_3_std
value: 4.47382265155717
- type: nauc_precision_at_5_diff1
value: 23.43287104284656
- type: nauc_precision_at_5_max
value: 30.909085068105313
- type: nauc_precision_at_5_std
value: 5.545672049452433
- type: nauc_recall_at_1000_diff1
value: 35.22615594677707
- type: nauc_recall_at_1000_max
value: 52.0710533173532
- type: nauc_recall_at_1000_std
value: 45.17683523786464
- type: nauc_recall_at_100_diff1
value: 36.2169056956332
- type: nauc_recall_at_100_max
value: 35.02435003210817
- type: nauc_recall_at_100_std
value: 15.833632946282508
- type: nauc_recall_at_10_diff1
value: 39.12440292974848
- type: nauc_recall_at_10_max
value: 28.0546011979648
- type: nauc_recall_at_10_std
value: -9.620558638092172
- type: nauc_recall_at_1_diff1
value: 53.98646767288695
- type: nauc_recall_at_1_max
value: 29.45629077638936
- type: nauc_recall_at_1_std
value: -5.621187380771589
- type: nauc_recall_at_20_diff1
value: 36.39254630768161
- type: nauc_recall_at_20_max
value: 29.277856508751967
- type: nauc_recall_at_20_std
value: -3.048007490798412
- type: nauc_recall_at_3_diff1
value: 45.64706642644958
- type: nauc_recall_at_3_max
value: 31.003050159737413
- type: nauc_recall_at_3_std
value: -4.849763876930667
- type: nauc_recall_at_5_diff1
value: 40.918108859971746
- type: nauc_recall_at_5_max
value: 30.69907335071493
- type: nauc_recall_at_5_std
value: -6.1445436251916865
- type: ndcg_at_1
value: 40.327
- type: ndcg_at_10
value: 49.675000000000004
- type: ndcg_at_100
value: 55.364000000000004
- type: ndcg_at_1000
value: 56.992
- type: ndcg_at_20
value: 51.803999999999995
- type: ndcg_at_3
value: 45.227000000000004
- type: ndcg_at_5
value: 47.244
- type: precision_at_1
value: 40.327
- type: precision_at_10
value: 8.826
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 5.115
- type: precision_at_3
value: 21.303
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 33.178999999999995
- type: recall_at_10
value: 61.087
- type: recall_at_100
value: 85.099
- type: recall_at_1000
value: 95.14099999999999
- type: recall_at_20
value: 68.623
- type: recall_at_3
value: 48.245
- type: recall_at_5
value: 53.832
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 44.99
- type: map_at_1
value: 28.089
- type: map_at_10
value: 38.98
- type: map_at_100
value: 40.339000000000006
- type: map_at_1000
value: 40.441
- type: map_at_20
value: 39.702
- type: map_at_3
value: 35.620000000000005
- type: map_at_5
value: 37.657000000000004
- type: mrr_at_1
value: 35.15981735159817
- type: mrr_at_10
value: 44.54075161266937
- type: mrr_at_100
value: 45.435730392436646
- type: mrr_at_1000
value: 45.47673849356812
- type: mrr_at_20
value: 45.05949613726918
- type: mrr_at_3
value: 42.00913242009131
- type: mrr_at_5
value: 43.52739726027392
- type: nauc_map_at_1000_diff1
value: 42.6375513442399
- type: nauc_map_at_1000_max
value: 35.83899956589522
- type: nauc_map_at_1000_std
value: 5.798620017712549
- type: nauc_map_at_100_diff1
value: 42.609712253881504
- type: nauc_map_at_100_max
value: 35.85401871065736
- type: nauc_map_at_100_std
value: 5.829007296755533
- type: nauc_map_at_10_diff1
value: 42.90931172127824
- type: nauc_map_at_10_max
value: 35.46694204511423
- type: nauc_map_at_10_std
value: 5.131477704152026
- type: nauc_map_at_1_diff1
value: 48.066312177855956
- type: nauc_map_at_1_max
value: 30.67745267941573
- type: nauc_map_at_1_std
value: -1.4170737991670943
- type: nauc_map_at_20_diff1
value: 42.730423700784
- type: nauc_map_at_20_max
value: 35.710039616497085
- type: nauc_map_at_20_std
value: 5.363961887475162
- type: nauc_map_at_3_diff1
value: 43.499223646579935
- type: nauc_map_at_3_max
value: 33.872570039621564
- type: nauc_map_at_3_std
value: 3.0787571843453008
- type: nauc_map_at_5_diff1
value: 43.28963642946521
- type: nauc_map_at_5_max
value: 35.18327408279892
- type: nauc_map_at_5_std
value: 4.516467154662473
- type: nauc_mrr_at_1000_diff1
value: 42.71279871641341
- type: nauc_mrr_at_1000_max
value: 37.48825064817496
- type: nauc_mrr_at_1000_std
value: 8.10015025024314
- type: nauc_mrr_at_100_diff1
value: 42.694777404773376
- type: nauc_mrr_at_100_max
value: 37.476741768741086
- type: nauc_mrr_at_100_std
value: 8.11525130417229
- type: nauc_mrr_at_10_diff1
value: 42.954194054560176
- type: nauc_mrr_at_10_max
value: 37.606138578797506
- type: nauc_mrr_at_10_std
value: 8.092519513302399
- type: nauc_mrr_at_1_diff1
value: 48.350790286038574
- type: nauc_mrr_at_1_max
value: 33.97992759739641
- type: nauc_mrr_at_1_std
value: 1.8332987018664093
- type: nauc_mrr_at_20_diff1
value: 42.664983701783044
- type: nauc_mrr_at_20_max
value: 37.47450702110784
- type: nauc_mrr_at_20_std
value: 8.001067634745462
- type: nauc_mrr_at_3_diff1
value: 42.921968602737955
- type: nauc_mrr_at_3_max
value: 37.19599728791262
- type: nauc_mrr_at_3_std
value: 7.4692697422507575
- type: nauc_mrr_at_5_diff1
value: 42.96028546491891
- type: nauc_mrr_at_5_max
value: 37.688350071295915
- type: nauc_mrr_at_5_std
value: 8.213017954012372
- type: nauc_ndcg_at_1000_diff1
value: 40.70763263942397
- type: nauc_ndcg_at_1000_max
value: 37.87768319167602
- type: nauc_ndcg_at_1000_std
value: 9.908807071686738
- type: nauc_ndcg_at_100_diff1
value: 39.97828438221707
- type: nauc_ndcg_at_100_max
value: 37.7723393835996
- type: nauc_ndcg_at_100_std
value: 10.666779466040097
- type: nauc_ndcg_at_10_diff1
value: 41.172233451172936
- type: nauc_ndcg_at_10_max
value: 37.12252131573939
- type: nauc_ndcg_at_10_std
value: 8.273798754436639
- type: nauc_ndcg_at_1_diff1
value: 48.350790286038574
- type: nauc_ndcg_at_1_max
value: 33.97992759739641
- type: nauc_ndcg_at_1_std
value: 1.8332987018664093
- type: nauc_ndcg_at_20_diff1
value: 40.33325895172716
- type: nauc_ndcg_at_20_max
value: 37.36015594019951
- type: nauc_ndcg_at_20_std
value: 8.818556108749302
- type: nauc_ndcg_at_3_diff1
value: 41.652701699747254
- type: nauc_ndcg_at_3_max
value: 35.499109874223294
- type: nauc_ndcg_at_3_std
value: 5.831784865606119
- type: nauc_ndcg_at_5_diff1
value: 41.856346892595475
- type: nauc_ndcg_at_5_max
value: 36.940681835687194
- type: nauc_ndcg_at_5_std
value: 7.507798515093516
- type: nauc_precision_at_1000_diff1
value: -2.4605367806784866
- type: nauc_precision_at_1000_max
value: -0.3538142127162922
- type: nauc_precision_at_1000_std
value: 8.369794961833236
- type: nauc_precision_at_100_diff1
value: -0.34954522096524704
- type: nauc_precision_at_100_max
value: 13.159909603146458
- type: nauc_precision_at_100_std
value: 19.425561514133996
- type: nauc_precision_at_10_diff1
value: 17.048304710148145
- type: nauc_precision_at_10_max
value: 29.816041846806375
- type: nauc_precision_at_10_std
value: 18.358893367243798
- type: nauc_precision_at_1_diff1
value: 48.350790286038574
- type: nauc_precision_at_1_max
value: 33.97992759739641
- type: nauc_precision_at_1_std
value: 1.8332987018664093
- type: nauc_precision_at_20_diff1
value: 10.450903599411344
- type: nauc_precision_at_20_max
value: 25.228916373799127
- type: nauc_precision_at_20_std
value: 18.46893569529936
- type: nauc_precision_at_3_diff1
value: 29.181236567048636
- type: nauc_precision_at_3_max
value: 35.64918262500281
- type: nauc_precision_at_3_std
value: 13.347538222514968
- type: nauc_precision_at_5_diff1
value: 23.693323840550345
- type: nauc_precision_at_5_max
value: 33.972399735191225
- type: nauc_precision_at_5_std
value: 17.107012760554618
- type: nauc_recall_at_1000_diff1
value: 20.297340483227945
- type: nauc_recall_at_1000_max
value: 63.084305970127275
- type: nauc_recall_at_1000_std
value: 63.04655000858784
- type: nauc_recall_at_100_diff1
value: 22.587332148979723
- type: nauc_recall_at_100_max
value: 40.740968468024775
- type: nauc_recall_at_100_std
value: 34.120423684507124
- type: nauc_recall_at_10_diff1
value: 33.361195948673675
- type: nauc_recall_at_10_max
value: 37.1411402410262
- type: nauc_recall_at_10_std
value: 13.475407196166259
- type: nauc_recall_at_1_diff1
value: 48.066312177855956
- type: nauc_recall_at_1_max
value: 30.67745267941573
- type: nauc_recall_at_1_std
value: -1.4170737991670943
- type: nauc_recall_at_20_diff1
value: 28.703982984383984
- type: nauc_recall_at_20_max
value: 37.32929431193496
- type: nauc_recall_at_20_std
value: 16.139135347989903
- type: nauc_recall_at_3_diff1
value: 36.53346179134789
- type: nauc_recall_at_3_max
value: 34.11397914899309
- type: nauc_recall_at_3_std
value: 7.19358019807132
- type: nauc_recall_at_5_diff1
value: 36.24058894947452
- type: nauc_recall_at_5_max
value: 37.00990358651097
- type: nauc_recall_at_5_std
value: 11.074645476821619
- type: ndcg_at_1
value: 35.160000000000004
- type: ndcg_at_10
value: 44.99
- type: ndcg_at_100
value: 50.661
- type: ndcg_at_1000
value: 52.599
- type: ndcg_at_20
value: 47.154
- type: ndcg_at_3
value: 39.843
- type: ndcg_at_5
value: 42.486000000000004
- type: precision_at_1
value: 35.160000000000004
- type: precision_at_10
value: 8.299
- type: precision_at_100
value: 1.2850000000000001
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_20
value: 4.84
- type: precision_at_3
value: 19.178
- type: precision_at_5
value: 13.927
- type: recall_at_1
value: 28.089
- type: recall_at_10
value: 57.158
- type: recall_at_100
value: 81.461
- type: recall_at_1000
value: 94.46900000000001
- type: recall_at_20
value: 64.927
- type: recall_at_3
value: 42.775999999999996
- type: recall_at_5
value: 49.719
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: CQADupstackRetrieval is a combined dataset
metrics:
- type: main_score
value: 44.989166666666655
- type: ndcg_at_10
value: 44.989166666666655
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 39.586
- type: map_at_1
value: 27.301
- type: map_at_10
value: 35.022
- type: map_at_100
value: 36.061
- type: map_at_1000
value: 36.146
- type: map_at_20
value: 35.608000000000004
- type: map_at_3
value: 32.978
- type: map_at_5
value: 33.994
- type: mrr_at_1
value: 30.67484662576687
- type: mrr_at_10
value: 38.1696124257474
- type: mrr_at_100
value: 38.99730898994137
- type: mrr_at_1000
value: 39.049871007408136
- type: mrr_at_20
value: 38.62424051396064
- type: mrr_at_3
value: 36.40081799591004
- type: mrr_at_5
value: 37.23670756646219
- type: nauc_map_at_1000_diff1
value: 50.4395097150819
- type: nauc_map_at_1000_max
value: 42.36231476768413
- type: nauc_map_at_1000_std
value: 1.0739414045485742
- type: nauc_map_at_100_diff1
value: 50.4253775421283
- type: nauc_map_at_100_max
value: 42.34508969348633
- type: nauc_map_at_100_std
value: 1.0590256535050135
- type: nauc_map_at_10_diff1
value: 50.74196619464362
- type: nauc_map_at_10_max
value: 42.354326434590284
- type: nauc_map_at_10_std
value: 0.6330167542705694
- type: nauc_map_at_1_diff1
value: 55.7404810490963
- type: nauc_map_at_1_max
value: 40.7676941648045
- type: nauc_map_at_1_std
value: -5.021772566610674
- type: nauc_map_at_20_diff1
value: 50.39792463598886
- type: nauc_map_at_20_max
value: 42.25768760228577
- type: nauc_map_at_20_std
value: 0.8979017700131807
- type: nauc_map_at_3_diff1
value: 51.53267996170815
- type: nauc_map_at_3_max
value: 41.78801756883417
- type: nauc_map_at_3_std
value: -0.6652383024396911
- type: nauc_map_at_5_diff1
value: 50.992783683271504
- type: nauc_map_at_5_max
value: 41.8607977828188
- type: nauc_map_at_5_std
value: 0.3484379897869807
- type: nauc_mrr_at_1000_diff1
value: 48.952907124445126
- type: nauc_mrr_at_1000_max
value: 42.93563741482114
- type: nauc_mrr_at_1000_std
value: 3.0791495753556424
- type: nauc_mrr_at_100_diff1
value: 48.941921107360805
- type: nauc_mrr_at_100_max
value: 42.94419657374061
- type: nauc_mrr_at_100_std
value: 3.075397087180154
- type: nauc_mrr_at_10_diff1
value: 49.098926306303056
- type: nauc_mrr_at_10_max
value: 42.941857820499806
- type: nauc_mrr_at_10_std
value: 2.8184474174054372
- type: nauc_mrr_at_1_diff1
value: 54.428109877009334
- type: nauc_mrr_at_1_max
value: 42.50273386972492
- type: nauc_mrr_at_1_std
value: -2.1811826216412187
- type: nauc_mrr_at_20_diff1
value: 48.82502192775839
- type: nauc_mrr_at_20_max
value: 42.92227277257095
- type: nauc_mrr_at_20_std
value: 2.975812634368533
- type: nauc_mrr_at_3_diff1
value: 49.440009227591176
- type: nauc_mrr_at_3_max
value: 42.95503176290712
- type: nauc_mrr_at_3_std
value: 2.2997128945013796
- type: nauc_mrr_at_5_diff1
value: 49.09846782701398
- type: nauc_mrr_at_5_max
value: 42.51449168285772
- type: nauc_mrr_at_5_std
value: 2.7785816484421297
- type: nauc_ndcg_at_1000_diff1
value: 48.14680758187888
- type: nauc_ndcg_at_1000_max
value: 43.57465718500695
- type: nauc_ndcg_at_1000_std
value: 5.287435676678261
- type: nauc_ndcg_at_100_diff1
value: 47.66081605743284
- type: nauc_ndcg_at_100_max
value: 43.28156751251163
- type: nauc_ndcg_at_100_std
value: 4.959626409663624
- type: nauc_ndcg_at_10_diff1
value: 48.25075619623878
- type: nauc_ndcg_at_10_max
value: 43.00688660666578
- type: nauc_ndcg_at_10_std
value: 3.2319193368891637
- type: nauc_ndcg_at_1_diff1
value: 54.428109877009334
- type: nauc_ndcg_at_1_max
value: 42.50273386972492
- type: nauc_ndcg_at_1_std
value: -2.1811826216412187
- type: nauc_ndcg_at_20_diff1
value: 47.1943098627403
- type: nauc_ndcg_at_20_max
value: 42.86954491768707
- type: nauc_ndcg_at_20_std
value: 4.08583080150737
- type: nauc_ndcg_at_3_diff1
value: 49.32681523192246
- type: nauc_ndcg_at_3_max
value: 42.46898641470274
- type: nauc_ndcg_at_3_std
value: 1.7416962407725236
- type: nauc_ndcg_at_5_diff1
value: 48.59647012439291
- type: nauc_ndcg_at_5_max
value: 42.07098889846439
- type: nauc_ndcg_at_5_std
value: 2.979621233356828
- type: nauc_precision_at_1000_diff1
value: -1.7366334161587105
- type: nauc_precision_at_1000_max
value: 17.70969166396819
- type: nauc_precision_at_1000_std
value: 17.50619975322144
- type: nauc_precision_at_100_diff1
value: 10.082579982582155
- type: nauc_precision_at_100_max
value: 28.024893516091776
- type: nauc_precision_at_100_std
value: 18.41413013357596
- type: nauc_precision_at_10_diff1
value: 28.796167732373657
- type: nauc_precision_at_10_max
value: 40.37340024485382
- type: nauc_precision_at_10_std
value: 13.718572711091733
- type: nauc_precision_at_1_diff1
value: 54.428109877009334
- type: nauc_precision_at_1_max
value: 42.50273386972492
- type: nauc_precision_at_1_std
value: -2.1811826216412187
- type: nauc_precision_at_20_diff1
value: 19.82691920771315
- type: nauc_precision_at_20_max
value: 34.45075390159975
- type: nauc_precision_at_20_std
value: 16.410812072348058
- type: nauc_precision_at_3_diff1
value: 40.85430254962678
- type: nauc_precision_at_3_max
value: 43.63016056067074
- type: nauc_precision_at_3_std
value: 9.322014634477581
- type: nauc_precision_at_5_diff1
value: 35.830272848975795
- type: nauc_precision_at_5_max
value: 41.30047691620363
- type: nauc_precision_at_5_std
value: 13.145693992266565
- type: nauc_recall_at_1000_diff1
value: 35.532000545890504
- type: nauc_recall_at_1000_max
value: 50.714223194510325
- type: nauc_recall_at_1000_std
value: 43.09037309139045
- type: nauc_recall_at_100_diff1
value: 35.11024488875192
- type: nauc_recall_at_100_max
value: 43.0874566265193
- type: nauc_recall_at_100_std
value: 19.70628521846854
- type: nauc_recall_at_10_diff1
value: 40.36203726741153
- type: nauc_recall_at_10_max
value: 42.581482582576726
- type: nauc_recall_at_10_std
value: 8.642553371022348
- type: nauc_recall_at_1_diff1
value: 55.7404810490963
- type: nauc_recall_at_1_max
value: 40.7676941648045
- type: nauc_recall_at_1_std
value: -5.021772566610674
- type: nauc_recall_at_20_diff1
value: 35.97348868186562
- type: nauc_recall_at_20_max
value: 41.82695933305065
- type: nauc_recall_at_20_std
value: 11.444957541593585
- type: nauc_recall_at_3_diff1
value: 44.20020470014979
- type: nauc_recall_at_3_max
value: 40.84130855296979
- type: nauc_recall_at_3_std
value: 5.004883338558809
- type: nauc_recall_at_5_diff1
value: 42.08756885472078
- type: nauc_recall_at_5_max
value: 39.90323783606852
- type: nauc_recall_at_5_std
value: 8.085182534171127
- type: ndcg_at_1
value: 30.675
- type: ndcg_at_10
value: 39.586
- type: ndcg_at_100
value: 44.737
- type: ndcg_at_1000
value: 46.863
- type: ndcg_at_20
value: 41.495
- type: ndcg_at_3
value: 35.8
- type: ndcg_at_5
value: 37.3
- type: precision_at_1
value: 30.675
- type: precision_at_10
value: 6.196
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_3
value: 15.337
- type: precision_at_5
value: 10.337
- type: recall_at_1
value: 27.301
- type: recall_at_10
value: 50.346999999999994
- type: recall_at_100
value: 74.459
- type: recall_at_1000
value: 90.018
- type: recall_at_20
value: 57.473
- type: recall_at_3
value: 39.672000000000004
- type: recall_at_5
value: 43.383
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 32.842
- type: map_at_1
value: 19.527
- type: map_at_10
value: 27.711999999999996
- type: map_at_100
value: 28.98
- type: map_at_1000
value: 29.108
- type: map_at_20
value: 28.407
- type: map_at_3
value: 25.023
- type: map_at_5
value: 26.528000000000002
- type: mrr_at_1
value: 23.675154852030282
- type: mrr_at_10
value: 31.810676323752784
- type: mrr_at_100
value: 32.788970614380716
- type: mrr_at_1000
value: 32.86028758975889
- type: mrr_at_20
value: 32.35935756676056
- type: mrr_at_3
value: 29.41615049323246
- type: mrr_at_5
value: 30.785730672172633
- type: nauc_map_at_1000_diff1
value: 35.597766688968015
- type: nauc_map_at_1000_max
value: 26.295790183159845
- type: nauc_map_at_1000_std
value: -0.04229904865958209
- type: nauc_map_at_100_diff1
value: 35.568782622469925
- type: nauc_map_at_100_max
value: 26.27850795471227
- type: nauc_map_at_100_std
value: -0.04944875782811099
- type: nauc_map_at_10_diff1
value: 35.63760937893694
- type: nauc_map_at_10_max
value: 26.130094042028233
- type: nauc_map_at_10_std
value: -0.6896882769027717
- type: nauc_map_at_1_diff1
value: 41.759098341890976
- type: nauc_map_at_1_max
value: 23.918885427783326
- type: nauc_map_at_1_std
value: -2.1383574897865074
- type: nauc_map_at_20_diff1
value: 35.55706530442612
- type: nauc_map_at_20_max
value: 26.23339626569677
- type: nauc_map_at_20_std
value: -0.162172033918129
- type: nauc_map_at_3_diff1
value: 37.22183376355153
- type: nauc_map_at_3_max
value: 25.770512522122186
- type: nauc_map_at_3_std
value: -1.3105892187778403
- type: nauc_map_at_5_diff1
value: 36.205913161663084
- type: nauc_map_at_5_max
value: 25.953300641502064
- type: nauc_map_at_5_std
value: -0.7987363137547906
- type: nauc_mrr_at_1000_diff1
value: 34.864016559617646
- type: nauc_mrr_at_1000_max
value: 26.8689525348564
- type: nauc_mrr_at_1000_std
value: -0.5839923973914446
- type: nauc_mrr_at_100_diff1
value: 34.83820469598538
- type: nauc_mrr_at_100_max
value: 26.864669056231282
- type: nauc_mrr_at_100_std
value: -0.5785645654158633
- type: nauc_mrr_at_10_diff1
value: 34.81868397381981
- type: nauc_mrr_at_10_max
value: 26.79988560460627
- type: nauc_mrr_at_10_std
value: -1.1113808365827318
- type: nauc_mrr_at_1_diff1
value: 40.0281507903504
- type: nauc_mrr_at_1_max
value: 25.036735941806583
- type: nauc_mrr_at_1_std
value: -2.508700799268523
- type: nauc_mrr_at_20_diff1
value: 34.81954537357966
- type: nauc_mrr_at_20_max
value: 26.877673033315453
- type: nauc_mrr_at_20_std
value: -0.6706028107452919
- type: nauc_mrr_at_3_diff1
value: 35.87313782549696
- type: nauc_mrr_at_3_max
value: 26.776261693392335
- type: nauc_mrr_at_3_std
value: -1.8010591328112908
- type: nauc_mrr_at_5_diff1
value: 35.31673912159536
- type: nauc_mrr_at_5_max
value: 26.78720786106881
- type: nauc_mrr_at_5_std
value: -1.3096326953900546
- type: nauc_ndcg_at_1000_diff1
value: 33.43105244339048
- type: nauc_ndcg_at_1000_max
value: 27.52195065724684
- type: nauc_ndcg_at_1000_std
value: 2.8376056562675744
- type: nauc_ndcg_at_100_diff1
value: 32.90916846420573
- type: nauc_ndcg_at_100_max
value: 27.27161017736065
- type: nauc_ndcg_at_100_std
value: 2.8703122625872126
- type: nauc_ndcg_at_10_diff1
value: 33.12714979317447
- type: nauc_ndcg_at_10_max
value: 26.67762031747992
- type: nauc_ndcg_at_10_std
value: -0.1341345572932233
- type: nauc_ndcg_at_1_diff1
value: 40.0281507903504
- type: nauc_ndcg_at_1_max
value: 25.036735941806583
- type: nauc_ndcg_at_1_std
value: -2.508700799268523
- type: nauc_ndcg_at_20_diff1
value: 32.891656138688546
- type: nauc_ndcg_at_20_max
value: 26.991976404027163
- type: nauc_ndcg_at_20_std
value: 1.6050741106677746
- type: nauc_ndcg_at_3_diff1
value: 35.576958713955484
- type: nauc_ndcg_at_3_max
value: 26.41687745899445
- type: nauc_ndcg_at_3_std
value: -1.5326687067002291
- type: nauc_ndcg_at_5_diff1
value: 34.27335619067276
- type: nauc_ndcg_at_5_max
value: 26.479515412084208
- type: nauc_ndcg_at_5_std
value: -0.5597648935666003
- type: nauc_precision_at_1000_diff1
value: -0.18660914306684007
- type: nauc_precision_at_1000_max
value: 7.268255385799229
- type: nauc_precision_at_1000_std
value: -0.1968875268478991
- type: nauc_precision_at_100_diff1
value: 7.386701205054449
- type: nauc_precision_at_100_max
value: 15.477735603019607
- type: nauc_precision_at_100_std
value: 4.753153414679307
- type: nauc_precision_at_10_diff1
value: 18.4668296945938
- type: nauc_precision_at_10_max
value: 25.457144217779597
- type: nauc_precision_at_10_std
value: 0.40165373733963605
- type: nauc_precision_at_1_diff1
value: 40.0281507903504
- type: nauc_precision_at_1_max
value: 25.036735941806583
- type: nauc_precision_at_1_std
value: -2.508700799268523
- type: nauc_precision_at_20_diff1
value: 14.751135844289335
- type: nauc_precision_at_20_max
value: 22.763373329576293
- type: nauc_precision_at_20_std
value: 4.360731801761864
- type: nauc_precision_at_3_diff1
value: 28.154753888265393
- type: nauc_precision_at_3_max
value: 27.838427033527147
- type: nauc_precision_at_3_std
value: -1.0042621266717804
- type: nauc_precision_at_5_diff1
value: 23.549026872711423
- type: nauc_precision_at_5_max
value: 27.192214745385044
- type: nauc_precision_at_5_std
value: 0.4455206110174471
- type: nauc_recall_at_1000_diff1
value: 17.905404210815632
- type: nauc_recall_at_1000_max
value: 32.8674418535776
- type: nauc_recall_at_1000_std
value: 35.187050415735435
- type: nauc_recall_at_100_diff1
value: 20.903609751984757
- type: nauc_recall_at_100_max
value: 27.180306691518364
- type: nauc_recall_at_100_std
value: 17.553030959393297
- type: nauc_recall_at_10_diff1
value: 25.615147693464387
- type: nauc_recall_at_10_max
value: 25.97062699453565
- type: nauc_recall_at_10_std
value: 2.2181702899826576
- type: nauc_recall_at_1_diff1
value: 41.759098341890976
- type: nauc_recall_at_1_max
value: 23.918885427783326
- type: nauc_recall_at_1_std
value: -2.1383574897865074
- type: nauc_recall_at_20_diff1
value: 23.922775940094386
- type: nauc_recall_at_20_max
value: 26.384627814902785
- type: nauc_recall_at_20_std
value: 7.944532403561578
- type: nauc_recall_at_3_diff1
value: 32.26543270634743
- type: nauc_recall_at_3_max
value: 26.36357710828272
- type: nauc_recall_at_3_std
value: -0.42723331708340706
- type: nauc_recall_at_5_diff1
value: 29.080464141763336
- type: nauc_recall_at_5_max
value: 25.81238438303652
- type: nauc_recall_at_5_std
value: 1.1649311168287726
- type: ndcg_at_1
value: 23.674999999999997
- type: ndcg_at_10
value: 32.842
- type: ndcg_at_100
value: 38.64
- type: ndcg_at_1000
value: 41.367
- type: ndcg_at_20
value: 35.032999999999994
- type: ndcg_at_3
value: 28.166000000000004
- type: ndcg_at_5
value: 30.407
- type: precision_at_1
value: 23.674999999999997
- type: precision_at_10
value: 6.005
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 3.6580000000000004
- type: precision_at_3
value: 13.352
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 19.527
- type: recall_at_10
value: 44.096999999999994
- type: recall_at_100
value: 69.962
- type: recall_at_1000
value: 89.035
- type: recall_at_20
value: 52.166000000000004
- type: recall_at_3
value: 30.946
- type: recall_at_5
value: 36.789
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 46.54
- type: map_at_1
value: 29.953999999999997
- type: map_at_10
value: 40.742
- type: map_at_100
value: 41.964
- type: map_at_1000
value: 42.059999999999995
- type: map_at_20
value: 41.426
- type: map_at_3
value: 37.378
- type: map_at_5
value: 39.267
- type: mrr_at_1
value: 34.701492537313435
- type: mrr_at_10
value: 44.29978085761664
- type: mrr_at_100
value: 45.205551401915486
- type: mrr_at_1000
value: 45.24735017384963
- type: mrr_at_20
value: 44.85338423755729
- type: mrr_at_3
value: 41.57338308457707
- type: mrr_at_5
value: 43.19185323383077
- type: nauc_map_at_1000_diff1
value: 48.45170522932164
- type: nauc_map_at_1000_max
value: 31.544164363591204
- type: nauc_map_at_1000_std
value: 0.8661088818146858
- type: nauc_map_at_100_diff1
value: 48.47347800061323
- type: nauc_map_at_100_max
value: 31.568637596620313
- type: nauc_map_at_100_std
value: 0.9252699336843858
- type: nauc_map_at_10_diff1
value: 48.64849891585432
- type: nauc_map_at_10_max
value: 31.40371265579746
- type: nauc_map_at_10_std
value: 0.7088016563713089
- type: nauc_map_at_1_diff1
value: 53.57918993108331
- type: nauc_map_at_1_max
value: 31.392632653740993
- type: nauc_map_at_1_std
value: -2.857306170463933
- type: nauc_map_at_20_diff1
value: 48.49084353023969
- type: nauc_map_at_20_max
value: 31.470313174779374
- type: nauc_map_at_20_std
value: 0.8950296035234309
- type: nauc_map_at_3_diff1
value: 49.273481161619806
- type: nauc_map_at_3_max
value: 31.101471509782826
- type: nauc_map_at_3_std
value: -0.886510096257905
- type: nauc_map_at_5_diff1
value: 48.85344288229106
- type: nauc_map_at_5_max
value: 31.32633663238284
- type: nauc_map_at_5_std
value: -0.44752909698881177
- type: nauc_mrr_at_1000_diff1
value: 46.27593166906613
- type: nauc_mrr_at_1000_max
value: 31.637594372116336
- type: nauc_mrr_at_1000_std
value: 0.8444917550670064
- type: nauc_mrr_at_100_diff1
value: 46.27161543033672
- type: nauc_mrr_at_100_max
value: 31.64330655339695
- type: nauc_mrr_at_100_std
value: 0.8717446416398773
- type: nauc_mrr_at_10_diff1
value: 46.100348481312864
- type: nauc_mrr_at_10_max
value: 31.594271897882237
- type: nauc_mrr_at_10_std
value: 0.8807168907688873
- type: nauc_mrr_at_1_diff1
value: 51.35163098909763
- type: nauc_mrr_at_1_max
value: 31.99084441327899
- type: nauc_mrr_at_1_std
value: -2.688594880742662
- type: nauc_mrr_at_20_diff1
value: 46.18178546174727
- type: nauc_mrr_at_20_max
value: 31.639111674119448
- type: nauc_mrr_at_20_std
value: 0.9855008641374622
- type: nauc_mrr_at_3_diff1
value: 46.307484835305864
- type: nauc_mrr_at_3_max
value: 31.35563850804847
- type: nauc_mrr_at_3_std
value: -0.3419536587707561
- type: nauc_mrr_at_5_diff1
value: 46.17646418781234
- type: nauc_mrr_at_5_max
value: 31.313474270239833
- type: nauc_mrr_at_5_std
value: -0.08656550526568331
- type: nauc_ndcg_at_1000_diff1
value: 46.12095795101613
- type: nauc_ndcg_at_1000_max
value: 31.989083597726314
- type: nauc_ndcg_at_1000_std
value: 3.2965704707660763
- type: nauc_ndcg_at_100_diff1
value: 46.05376249841318
- type: nauc_ndcg_at_100_max
value: 32.39195988574972
- type: nauc_ndcg_at_100_std
value: 4.518018135593347
- type: nauc_ndcg_at_10_diff1
value: 46.133631183744875
- type: nauc_ndcg_at_10_max
value: 31.45358876172339
- type: nauc_ndcg_at_10_std
value: 3.4254370918871055
- type: nauc_ndcg_at_1_diff1
value: 51.35163098909763
- type: nauc_ndcg_at_1_max
value: 31.99084441327899
- type: nauc_ndcg_at_1_std
value: -2.688594880742662
- type: nauc_ndcg_at_20_diff1
value: 45.94584949766954
- type: nauc_ndcg_at_20_max
value: 31.689777515111295
- type: nauc_ndcg_at_20_std
value: 4.189082428922442
- type: nauc_ndcg_at_3_diff1
value: 46.5057835389752
- type: nauc_ndcg_at_3_max
value: 30.941407592082047
- type: nauc_ndcg_at_3_std
value: -0.042473944857831535
- type: nauc_ndcg_at_5_diff1
value: 46.369027395136136
- type: nauc_ndcg_at_5_max
value: 31.057841776505352
- type: nauc_ndcg_at_5_std
value: 0.6878993420489522
- type: nauc_precision_at_1000_diff1
value: -17.30759714093202
- type: nauc_precision_at_1000_max
value: -4.441155558458858
- type: nauc_precision_at_1000_std
value: 1.5537300718220326
- type: nauc_precision_at_100_diff1
value: -7.18920438222021
- type: nauc_precision_at_100_max
value: 8.017878121399253
- type: nauc_precision_at_100_std
value: 11.357132919349102
- type: nauc_precision_at_10_diff1
value: 15.202451884794076
- type: nauc_precision_at_10_max
value: 19.077295902881417
- type: nauc_precision_at_10_std
value: 9.885526867355805
- type: nauc_precision_at_1_diff1
value: 51.35163098909763
- type: nauc_precision_at_1_max
value: 31.99084441327899
- type: nauc_precision_at_1_std
value: -2.688594880742662
- type: nauc_precision_at_20_diff1
value: 6.827461091494899
- type: nauc_precision_at_20_max
value: 15.27268633497114
- type: nauc_precision_at_20_std
value: 11.515826649647384
- type: nauc_precision_at_3_diff1
value: 31.043021807472027
- type: nauc_precision_at_3_max
value: 26.22457157531548
- type: nauc_precision_at_3_std
value: 1.788215968301994
- type: nauc_precision_at_5_diff1
value: 25.030185818513235
- type: nauc_precision_at_5_max
value: 23.680129160901537
- type: nauc_precision_at_5_std
value: 4.303018899688115
- type: nauc_recall_at_1000_diff1
value: 28.68826642607512
- type: nauc_recall_at_1000_max
value: 42.33849804103852
- type: nauc_recall_at_1000_std
value: 42.67413575876864
- type: nauc_recall_at_100_diff1
value: 36.51494878715
- type: nauc_recall_at_100_max
value: 37.4764995034434
- type: nauc_recall_at_100_std
value: 28.295671266661017
- type: nauc_recall_at_10_diff1
value: 39.416721111463524
- type: nauc_recall_at_10_max
value: 29.95985608454179
- type: nauc_recall_at_10_std
value: 12.423335839786201
- type: nauc_recall_at_1_diff1
value: 53.57918993108331
- type: nauc_recall_at_1_max
value: 31.392632653740993
- type: nauc_recall_at_1_std
value: -2.857306170463933
- type: nauc_recall_at_20_diff1
value: 38.228803480194046
- type: nauc_recall_at_20_max
value: 30.87261362975955
- type: nauc_recall_at_20_std
value: 16.977113091834095
- type: nauc_recall_at_3_diff1
value: 43.154348566653155
- type: nauc_recall_at_3_max
value: 29.54536633744803
- type: nauc_recall_at_3_std
value: 2.02842672250621
- type: nauc_recall_at_5_diff1
value: 41.00436246072242
- type: nauc_recall_at_5_max
value: 29.413569555348023
- type: nauc_recall_at_5_std
value: 3.845214021958289
- type: ndcg_at_1
value: 34.701
- type: ndcg_at_10
value: 46.54
- type: ndcg_at_100
value: 51.754999999999995
- type: ndcg_at_1000
value: 53.71
- type: ndcg_at_20
value: 48.679
- type: ndcg_at_3
value: 40.892
- type: ndcg_at_5
value: 43.595
- type: precision_at_1
value: 34.701
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.145
- type: precision_at_20
value: 4.632
- type: precision_at_3
value: 18.719
- type: precision_at_5
value: 13.245999999999999
- type: recall_at_1
value: 29.953999999999997
- type: recall_at_10
value: 60.246
- type: recall_at_100
value: 82.128
- type: recall_at_1000
value: 95.622
- type: recall_at_20
value: 67.756
- type: recall_at_3
value: 45.096000000000004
- type: recall_at_5
value: 51.9
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 44.718999999999994
- type: map_at_1
value: 28.383999999999997
- type: map_at_10
value: 38.422
- type: map_at_100
value: 40.058
- type: map_at_1000
value: 40.276
- type: map_at_20
value: 39.301
- type: map_at_3
value: 35.205
- type: map_at_5
value: 36.803999999999995
- type: mrr_at_1
value: 33.59683794466403
- type: mrr_at_10
value: 42.837536859275986
- type: mrr_at_100
value: 43.7501703455481
- type: mrr_at_1000
value: 43.79258407771123
- type: mrr_at_20
value: 43.36044710445095
- type: mrr_at_3
value: 40.15151515151516
- type: mrr_at_5
value: 41.74242424242425
- type: nauc_map_at_1000_diff1
value: 47.934826596875304
- type: nauc_map_at_1000_max
value: 32.39759438116062
- type: nauc_map_at_1000_std
value: 0.9489007346763054
- type: nauc_map_at_100_diff1
value: 47.94844822157888
- type: nauc_map_at_100_max
value: 32.51485845519537
- type: nauc_map_at_100_std
value: 0.8094339925545622
- type: nauc_map_at_10_diff1
value: 48.251456404874645
- type: nauc_map_at_10_max
value: 31.412906399154245
- type: nauc_map_at_10_std
value: -0.7024825737369933
- type: nauc_map_at_1_diff1
value: 55.81906101970174
- type: nauc_map_at_1_max
value: 31.811715334193796
- type: nauc_map_at_1_std
value: -6.17056859281584
- type: nauc_map_at_20_diff1
value: 47.80902650237369
- type: nauc_map_at_20_max
value: 32.22465403023091
- type: nauc_map_at_20_std
value: 0.20706526946705656
- type: nauc_map_at_3_diff1
value: 49.97333984346632
- type: nauc_map_at_3_max
value: 31.58195498640799
- type: nauc_map_at_3_std
value: -2.577539707727459
- type: nauc_map_at_5_diff1
value: 49.40005767350608
- type: nauc_map_at_5_max
value: 30.998435600377434
- type: nauc_map_at_5_std
value: -2.1231771618690307
- type: nauc_mrr_at_1000_diff1
value: 46.86811371969663
- type: nauc_mrr_at_1000_max
value: 31.25147138171024
- type: nauc_mrr_at_1000_std
value: 1.9954422477585918
- type: nauc_mrr_at_100_diff1
value: 46.855870345882195
- type: nauc_mrr_at_100_max
value: 31.263524035665966
- type: nauc_mrr_at_100_std
value: 2.0160751193806568
- type: nauc_mrr_at_10_diff1
value: 46.93294772825783
- type: nauc_mrr_at_10_max
value: 30.927002048701663
- type: nauc_mrr_at_10_std
value: 1.6538220080908224
- type: nauc_mrr_at_1_diff1
value: 52.416386548395664
- type: nauc_mrr_at_1_max
value: 32.28582003787206
- type: nauc_mrr_at_1_std
value: -2.154991145714492
- type: nauc_mrr_at_20_diff1
value: 46.71796185319694
- type: nauc_mrr_at_20_max
value: 31.16219902794994
- type: nauc_mrr_at_20_std
value: 1.8590646572728409
- type: nauc_mrr_at_3_diff1
value: 47.697100317669914
- type: nauc_mrr_at_3_max
value: 30.821806030159383
- type: nauc_mrr_at_3_std
value: 1.1927626358099177
- type: nauc_mrr_at_5_diff1
value: 47.065272061365704
- type: nauc_mrr_at_5_max
value: 30.299230962805023
- type: nauc_mrr_at_5_std
value: 1.3225842862629529
- type: nauc_ndcg_at_1000_diff1
value: 45.20612583136058
- type: nauc_ndcg_at_1000_max
value: 33.51931869947315
- type: nauc_ndcg_at_1000_std
value: 4.923707509620363
- type: nauc_ndcg_at_100_diff1
value: 44.76206243393775
- type: nauc_ndcg_at_100_max
value: 33.57771606755598
- type: nauc_ndcg_at_100_std
value: 5.30915563331338
- type: nauc_ndcg_at_10_diff1
value: 45.12714032463827
- type: nauc_ndcg_at_10_max
value: 30.351909495610492
- type: nauc_ndcg_at_10_std
value: 2.3972947289996873
- type: nauc_ndcg_at_1_diff1
value: 52.416386548395664
- type: nauc_ndcg_at_1_max
value: 32.28582003787206
- type: nauc_ndcg_at_1_std
value: -2.154991145714492
- type: nauc_ndcg_at_20_diff1
value: 44.20281844000005
- type: nauc_ndcg_at_20_max
value: 32.14112739396226
- type: nauc_ndcg_at_20_std
value: 3.3971385462591916
- type: nauc_ndcg_at_3_diff1
value: 47.0633767031858
- type: nauc_ndcg_at_3_max
value: 31.032896053733435
- type: nauc_ndcg_at_3_std
value: 0.6827544906310201
- type: nauc_ndcg_at_5_diff1
value: 46.735352294106484
- type: nauc_ndcg_at_5_max
value: 29.784992270528544
- type: nauc_ndcg_at_5_std
value: 0.8685943819516141
- type: nauc_precision_at_1000_diff1
value: -12.223330179860852
- type: nauc_precision_at_1000_max
value: -9.266492213777273
- type: nauc_precision_at_1000_std
value: 19.0569899587788
- type: nauc_precision_at_100_diff1
value: -5.803751085072067
- type: nauc_precision_at_100_max
value: 3.448932057044294
- type: nauc_precision_at_100_std
value: 23.470863527030627
- type: nauc_precision_at_10_diff1
value: 8.887357341361907
- type: nauc_precision_at_10_max
value: 18.67165390928126
- type: nauc_precision_at_10_std
value: 19.158543337955404
- type: nauc_precision_at_1_diff1
value: 52.416386548395664
- type: nauc_precision_at_1_max
value: 32.28582003787206
- type: nauc_precision_at_1_std
value: -2.154991145714492
- type: nauc_precision_at_20_diff1
value: 0.942496138409553
- type: nauc_precision_at_20_max
value: 18.86957127610774
- type: nauc_precision_at_20_std
value: 24.075503903246496
- type: nauc_precision_at_3_diff1
value: 28.15363877307106
- type: nauc_precision_at_3_max
value: 27.064928137991824
- type: nauc_precision_at_3_std
value: 8.632807104504753
- type: nauc_precision_at_5_diff1
value: 20.805862332497973
- type: nauc_precision_at_5_max
value: 21.420201475758404
- type: nauc_precision_at_5_std
value: 12.380239645425714
- type: nauc_recall_at_1000_diff1
value: 18.478341468055547
- type: nauc_recall_at_1000_max
value: 56.293560115074506
- type: nauc_recall_at_1000_std
value: 64.31607185065428
- type: nauc_recall_at_100_diff1
value: 26.737267337771886
- type: nauc_recall_at_100_max
value: 38.011889141496326
- type: nauc_recall_at_100_std
value: 30.44904690114732
- type: nauc_recall_at_10_diff1
value: 35.22772732735716
- type: nauc_recall_at_10_max
value: 26.000054115159486
- type: nauc_recall_at_10_std
value: 5.174264254271206
- type: nauc_recall_at_1_diff1
value: 55.81906101970174
- type: nauc_recall_at_1_max
value: 31.811715334193796
- type: nauc_recall_at_1_std
value: -6.17056859281584
- type: nauc_recall_at_20_diff1
value: 30.48493302415641
- type: nauc_recall_at_20_max
value: 31.05487040370753
- type: nauc_recall_at_20_std
value: 10.319948318834136
- type: nauc_recall_at_3_diff1
value: 43.12289512340243
- type: nauc_recall_at_3_max
value: 28.176279771026135
- type: nauc_recall_at_3_std
value: -0.1775154523381921
- type: nauc_recall_at_5_diff1
value: 40.9934933741234
- type: nauc_recall_at_5_max
value: 25.569156290584733
- type: nauc_recall_at_5_std
value: 0.21166696686855038
- type: ndcg_at_1
value: 33.597
- type: ndcg_at_10
value: 44.718999999999994
- type: ndcg_at_100
value: 50.324000000000005
- type: ndcg_at_1000
value: 52.468
- type: ndcg_at_20
value: 46.822
- type: ndcg_at_3
value: 39.558
- type: ndcg_at_5
value: 41.827999999999996
- type: precision_at_1
value: 33.597
- type: precision_at_10
value: 8.735
- type: precision_at_100
value: 1.6420000000000001
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 5.375
- type: precision_at_3
value: 18.511
- type: precision_at_5
value: 13.399
- type: recall_at_1
value: 28.383999999999997
- type: recall_at_10
value: 56.425000000000004
- type: recall_at_100
value: 82.01899999999999
- type: recall_at_1000
value: 95.285
- type: recall_at_20
value: 64.615
- type: recall_at_3
value: 42.171
- type: recall_at_5
value: 48.296
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 38.269999999999996
- type: map_at_1
value: 25.324999999999996
- type: map_at_10
value: 33.263
- type: map_at_100
value: 34.304
- type: map_at_1000
value: 34.394000000000005
- type: map_at_20
value: 33.827
- type: map_at_3
value: 30.259999999999998
- type: map_at_5
value: 31.832
- type: mrr_at_1
value: 27.171903881700555
- type: mrr_at_10
value: 35.334991051257234
- type: mrr_at_100
value: 36.251283465952355
- type: mrr_at_1000
value: 36.316236092511055
- type: mrr_at_20
value: 35.87141909945257
- type: mrr_at_3
value: 32.71719038817007
- type: mrr_at_5
value: 34.19593345656194
- type: nauc_map_at_1000_diff1
value: 39.614836211522714
- type: nauc_map_at_1000_max
value: 22.019768626310192
- type: nauc_map_at_1000_std
value: -1.5238708712112499
- type: nauc_map_at_100_diff1
value: 39.63008548572307
- type: nauc_map_at_100_max
value: 22.044756063752345
- type: nauc_map_at_100_std
value: -1.4869190221494792
- type: nauc_map_at_10_diff1
value: 39.73025012395569
- type: nauc_map_at_10_max
value: 22.117710178892107
- type: nauc_map_at_10_std
value: -2.5129984871932973
- type: nauc_map_at_1_diff1
value: 45.015617718902654
- type: nauc_map_at_1_max
value: 19.313800263189638
- type: nauc_map_at_1_std
value: -4.763931386681675
- type: nauc_map_at_20_diff1
value: 39.53678019013766
- type: nauc_map_at_20_max
value: 21.880316719428258
- type: nauc_map_at_20_std
value: -1.882003994523355
- type: nauc_map_at_3_diff1
value: 40.37307665298228
- type: nauc_map_at_3_max
value: 20.851976075322533
- type: nauc_map_at_3_std
value: -2.429569082966531
- type: nauc_map_at_5_diff1
value: 39.763015635086
- type: nauc_map_at_5_max
value: 22.010102196900725
- type: nauc_map_at_5_std
value: -2.654896415670943
- type: nauc_mrr_at_1000_diff1
value: 39.74071733680025
- type: nauc_mrr_at_1000_max
value: 21.67309640681989
- type: nauc_mrr_at_1000_std
value: -1.4003373135477462
- type: nauc_mrr_at_100_diff1
value: 39.730614151966485
- type: nauc_mrr_at_100_max
value: 21.678390048971767
- type: nauc_mrr_at_100_std
value: -1.3655362623563931
- type: nauc_mrr_at_10_diff1
value: 39.7900031013241
- type: nauc_mrr_at_10_max
value: 21.73643491725051
- type: nauc_mrr_at_10_std
value: -2.1175389838696312
- type: nauc_mrr_at_1_diff1
value: 46.165736140679776
- type: nauc_mrr_at_1_max
value: 20.071083446822147
- type: nauc_mrr_at_1_std
value: -5.018909100858311
- type: nauc_mrr_at_20_diff1
value: 39.6371295762885
- type: nauc_mrr_at_20_max
value: 21.659557440270973
- type: nauc_mrr_at_20_std
value: -1.4909603958341686
- type: nauc_mrr_at_3_diff1
value: 40.351150322758876
- type: nauc_mrr_at_3_max
value: 20.83706249041544
- type: nauc_mrr_at_3_std
value: -1.956027373253151
- type: nauc_mrr_at_5_diff1
value: 39.57759107791911
- type: nauc_mrr_at_5_max
value: 21.79552045204151
- type: nauc_mrr_at_5_std
value: -2.1507013120951126
- type: nauc_ndcg_at_1000_diff1
value: 37.717619356839016
- type: nauc_ndcg_at_1000_max
value: 22.545375504379805
- type: nauc_ndcg_at_1000_std
value: 1.682348628141016
- type: nauc_ndcg_at_100_diff1
value: 37.656027803682626
- type: nauc_ndcg_at_100_max
value: 22.49278246383637
- type: nauc_ndcg_at_100_std
value: 2.6818118152357773
- type: nauc_ndcg_at_10_diff1
value: 37.834954205539766
- type: nauc_ndcg_at_10_max
value: 22.655839885558443
- type: nauc_ndcg_at_10_std
value: -1.97159619786231
- type: nauc_ndcg_at_1_diff1
value: 46.165736140679776
- type: nauc_ndcg_at_1_max
value: 20.071083446822147
- type: nauc_ndcg_at_1_std
value: -5.018909100858311
- type: nauc_ndcg_at_20_diff1
value: 37.171914857454304
- type: nauc_ndcg_at_20_max
value: 21.858904801745897
- type: nauc_ndcg_at_20_std
value: 0.3809854859496657
- type: nauc_ndcg_at_3_diff1
value: 38.4460623883955
- type: nauc_ndcg_at_3_max
value: 20.95244159463402
- type: nauc_ndcg_at_3_std
value: -1.2685011660086651
- type: nauc_ndcg_at_5_diff1
value: 37.48831054573054
- type: nauc_ndcg_at_5_max
value: 22.625921624640526
- type: nauc_ndcg_at_5_std
value: -2.049221092724925
- type: nauc_precision_at_1000_diff1
value: -19.120500628263994
- type: nauc_precision_at_1000_max
value: -6.650707109047473
- type: nauc_precision_at_1000_std
value: 15.71193179253002
- type: nauc_precision_at_100_diff1
value: 6.254606806876069
- type: nauc_precision_at_100_max
value: 14.601826922181823
- type: nauc_precision_at_100_std
value: 28.38299592246453
- type: nauc_precision_at_10_diff1
value: 22.978614338670816
- type: nauc_precision_at_10_max
value: 23.04146766323557
- type: nauc_precision_at_10_std
value: 6.226264308612577
- type: nauc_precision_at_1_diff1
value: 46.165736140679776
- type: nauc_precision_at_1_max
value: 20.071083446822147
- type: nauc_precision_at_1_std
value: -5.018909100858311
- type: nauc_precision_at_20_diff1
value: 17.681032853225602
- type: nauc_precision_at_20_max
value: 18.66680304585122
- type: nauc_precision_at_20_std
value: 15.34896796713905
- type: nauc_precision_at_3_diff1
value: 31.359396694559194
- type: nauc_precision_at_3_max
value: 22.279263308973274
- type: nauc_precision_at_3_std
value: 3.6302537979529035
- type: nauc_precision_at_5_diff1
value: 26.32257879892933
- type: nauc_precision_at_5_max
value: 25.402524493181026
- type: nauc_precision_at_5_std
value: 4.731450603747359
- type: nauc_recall_at_1000_diff1
value: 23.562925244967875
- type: nauc_recall_at_1000_max
value: 30.737399333586797
- type: nauc_recall_at_1000_std
value: 34.19418935008663
- type: nauc_recall_at_100_diff1
value: 28.703574970574824
- type: nauc_recall_at_100_max
value: 22.448663600170278
- type: nauc_recall_at_100_std
value: 24.53297349042035
- type: nauc_recall_at_10_diff1
value: 31.73603907811882
- type: nauc_recall_at_10_max
value: 23.453183748640765
- type: nauc_recall_at_10_std
value: -1.8279054407176274
- type: nauc_recall_at_1_diff1
value: 45.015617718902654
- type: nauc_recall_at_1_max
value: 19.313800263189638
- type: nauc_recall_at_1_std
value: -4.763931386681675
- type: nauc_recall_at_20_diff1
value: 28.74169081866096
- type: nauc_recall_at_20_max
value: 20.035509169577324
- type: nauc_recall_at_20_std
value: 7.371615811227748
- type: nauc_recall_at_3_diff1
value: 34.09890157333362
- type: nauc_recall_at_3_max
value: 20.46565842748346
- type: nauc_recall_at_3_std
value: -0.4337283067447526
- type: nauc_recall_at_5_diff1
value: 30.974580787842402
- type: nauc_recall_at_5_max
value: 23.76379349487105
- type: nauc_recall_at_5_std
value: -1.8407515927979428
- type: ndcg_at_1
value: 27.172
- type: ndcg_at_10
value: 38.269999999999996
- type: ndcg_at_100
value: 43.338
- type: ndcg_at_1000
value: 45.594
- type: ndcg_at_20
value: 40.256
- type: ndcg_at_3
value: 32.673
- type: ndcg_at_5
value: 35.224
- type: precision_at_1
value: 27.172
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.123
- type: precision_at_20
value: 3.5029999999999997
- type: precision_at_3
value: 13.74
- type: precision_at_5
value: 9.797
- type: recall_at_1
value: 25.324999999999996
- type: recall_at_10
value: 51.634
- type: recall_at_100
value: 74.687
- type: recall_at_1000
value: 91.412
- type: recall_at_20
value: 59.207
- type: recall_at_3
value: 36.678
- type: recall_at_5
value: 42.742999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 36.853
- type: map_at_1
value: 15.371000000000002
- type: map_at_10
value: 27.122
- type: map_at_100
value: 29.226000000000003
- type: map_at_1000
value: 29.409999999999997
- type: map_at_20
value: 28.274
- type: map_at_3
value: 22.431
- type: map_at_5
value: 24.877
- type: mrr_at_1
value: 34.13680781758958
- type: mrr_at_10
value: 47.265911793599145
- type: mrr_at_100
value: 48.028369995763846
- type: mrr_at_1000
value: 48.05317022537804
- type: mrr_at_20
value: 47.75785292259516
- type: mrr_at_3
value: 43.887079261672156
- type: mrr_at_5
value: 45.906623235613544
- type: nauc_map_at_1000_diff1
value: 24.949211292921547
- type: nauc_map_at_1000_max
value: 38.69844483304584
- type: nauc_map_at_1000_std
value: 18.336359440844753
- type: nauc_map_at_100_diff1
value: 24.8951732982492
- type: nauc_map_at_100_max
value: 38.65049158594052
- type: nauc_map_at_100_std
value: 18.28935278388095
- type: nauc_map_at_10_diff1
value: 24.606032216798273
- type: nauc_map_at_10_max
value: 38.00608351559887
- type: nauc_map_at_10_std
value: 16.61261615173358
- type: nauc_map_at_1_diff1
value: 30.83614944448221
- type: nauc_map_at_1_max
value: 33.757528532809
- type: nauc_map_at_1_std
value: 8.880622713261126
- type: nauc_map_at_20_diff1
value: 24.75491310922017
- type: nauc_map_at_20_max
value: 38.353679076398834
- type: nauc_map_at_20_std
value: 17.58637493443171
- type: nauc_map_at_3_diff1
value: 25.563085273287083
- type: nauc_map_at_3_max
value: 35.14515679047155
- type: nauc_map_at_3_std
value: 11.75594869817732
- type: nauc_map_at_5_diff1
value: 24.815807517691614
- type: nauc_map_at_5_max
value: 36.25905426665983
- type: nauc_map_at_5_std
value: 14.516391726180697
- type: nauc_mrr_at_1000_diff1
value: 27.948233427121274
- type: nauc_mrr_at_1000_max
value: 37.5893640945859
- type: nauc_mrr_at_1000_std
value: 19.588442449629763
- type: nauc_mrr_at_100_diff1
value: 27.947962345854037
- type: nauc_mrr_at_100_max
value: 37.60375479481945
- type: nauc_mrr_at_100_std
value: 19.614791576283793
- type: nauc_mrr_at_10_diff1
value: 27.882311310262136
- type: nauc_mrr_at_10_max
value: 37.58580968074054
- type: nauc_mrr_at_10_std
value: 19.49875186170201
- type: nauc_mrr_at_1_diff1
value: 28.017413073648477
- type: nauc_mrr_at_1_max
value: 32.87710191514022
- type: nauc_mrr_at_1_std
value: 14.04889142608459
- type: nauc_mrr_at_20_diff1
value: 27.89129925771968
- type: nauc_mrr_at_20_max
value: 37.6142863106945
- type: nauc_mrr_at_20_std
value: 19.645390143394163
- type: nauc_mrr_at_3_diff1
value: 27.99609559690795
- type: nauc_mrr_at_3_max
value: 36.87362332456197
- type: nauc_mrr_at_3_std
value: 18.598416821915333
- type: nauc_mrr_at_5_diff1
value: 27.68306089976716
- type: nauc_mrr_at_5_max
value: 37.12264485659723
- type: nauc_mrr_at_5_std
value: 19.18875305730564
- type: nauc_ndcg_at_1000_diff1
value: 25.736779186453777
- type: nauc_ndcg_at_1000_max
value: 41.93281139456004
- type: nauc_ndcg_at_1000_std
value: 25.179038422659993
- type: nauc_ndcg_at_100_diff1
value: 25.144796623848322
- type: nauc_ndcg_at_100_max
value: 41.72820916876173
- type: nauc_ndcg_at_100_std
value: 25.12851686850754
- type: nauc_ndcg_at_10_diff1
value: 24.321249191226652
- type: nauc_ndcg_at_10_max
value: 40.23711916935706
- type: nauc_ndcg_at_10_std
value: 20.89060972334557
- type: nauc_ndcg_at_1_diff1
value: 28.017413073648477
- type: nauc_ndcg_at_1_max
value: 32.87710191514022
- type: nauc_ndcg_at_1_std
value: 14.04889142608459
- type: nauc_ndcg_at_20_diff1
value: 24.5090484877482
- type: nauc_ndcg_at_20_max
value: 40.752854032983606
- type: nauc_ndcg_at_20_std
value: 22.70331074781384
- type: nauc_ndcg_at_3_diff1
value: 25.13499057756147
- type: nauc_ndcg_at_3_max
value: 35.8325682137567
- type: nauc_ndcg_at_3_std
value: 15.23768392706637
- type: nauc_ndcg_at_5_diff1
value: 24.614105695451116
- type: nauc_ndcg_at_5_max
value: 37.68089587624492
- type: nauc_ndcg_at_5_std
value: 17.946406099261708
- type: nauc_precision_at_1000_diff1
value: -2.022340544774227
- type: nauc_precision_at_1000_max
value: 6.070578645067797
- type: nauc_precision_at_1000_std
value: 22.15132728777549
- type: nauc_precision_at_100_diff1
value: 4.544144474504255
- type: nauc_precision_at_100_max
value: 19.780392159848574
- type: nauc_precision_at_100_std
value: 31.107111186002438
- type: nauc_precision_at_10_diff1
value: 10.107015022955848
- type: nauc_precision_at_10_max
value: 30.779709099060465
- type: nauc_precision_at_10_std
value: 27.324148451668602
- type: nauc_precision_at_1_diff1
value: 28.017413073648477
- type: nauc_precision_at_1_max
value: 32.87710191514022
- type: nauc_precision_at_1_std
value: 14.04889142608459
- type: nauc_precision_at_20_diff1
value: 8.270881053079405
- type: nauc_precision_at_20_max
value: 27.26753946078481
- type: nauc_precision_at_20_std
value: 29.156725822074204
- type: nauc_precision_at_3_diff1
value: 17.82468940497632
- type: nauc_precision_at_3_max
value: 31.490021174215155
- type: nauc_precision_at_3_std
value: 18.73818985054394
- type: nauc_precision_at_5_diff1
value: 13.24803141673961
- type: nauc_precision_at_5_max
value: 29.94926240784298
- type: nauc_precision_at_5_std
value: 23.2940906142919
- type: nauc_recall_at_1000_diff1
value: 19.09850333580471
- type: nauc_recall_at_1000_max
value: 46.026306142840596
- type: nauc_recall_at_1000_std
value: 46.50391519568263
- type: nauc_recall_at_100_diff1
value: 16.739384224869738
- type: nauc_recall_at_100_max
value: 40.68987136431252
- type: nauc_recall_at_100_std
value: 36.01609750485591
- type: nauc_recall_at_10_diff1
value: 17.51796617221814
- type: nauc_recall_at_10_max
value: 39.47453129444401
- type: nauc_recall_at_10_std
value: 23.79239002974899
- type: nauc_recall_at_1_diff1
value: 30.83614944448221
- type: nauc_recall_at_1_max
value: 33.757528532809
- type: nauc_recall_at_1_std
value: 8.880622713261126
- type: nauc_recall_at_20_diff1
value: 16.978668307251652
- type: nauc_recall_at_20_max
value: 39.09115357303713
- type: nauc_recall_at_20_std
value: 27.278668534187524
- type: nauc_recall_at_3_diff1
value: 22.55937738994021
- type: nauc_recall_at_3_max
value: 36.25055459395638
- type: nauc_recall_at_3_std
value: 14.828905168761247
- type: nauc_recall_at_5_diff1
value: 19.32656748627199
- type: nauc_recall_at_5_max
value: 36.28836228620816
- type: nauc_recall_at_5_std
value: 19.264352933914278
- type: ndcg_at_1
value: 34.137
- type: ndcg_at_10
value: 36.853
- type: ndcg_at_100
value: 44.279
- type: ndcg_at_1000
value: 47.336
- type: ndcg_at_20
value: 39.815
- type: ndcg_at_3
value: 30.253999999999998
- type: ndcg_at_5
value: 32.649
- type: precision_at_1
value: 34.137
- type: precision_at_10
value: 11.655
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.254
- type: precision_at_20
value: 7.1209999999999996
- type: precision_at_3
value: 22.823
- type: precision_at_5
value: 17.655
- type: recall_at_1
value: 15.371000000000002
- type: recall_at_10
value: 43.718
- type: recall_at_100
value: 68.81
- type: recall_at_1000
value: 85.69600000000001
- type: recall_at_20
value: 51.94
- type: recall_at_3
value: 27.694000000000003
- type: recall_at_5
value: 34.469
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 45.553
- type: map_at_1
value: 9.168999999999999
- type: map_at_10
value: 22.154
- type: map_at_100
value: 32.174
- type: map_at_1000
value: 33.974
- type: map_at_20
value: 25.899
- type: map_at_3
value: 15.275
- type: map_at_5
value: 18.291
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.39662698412697
- type: mrr_at_100
value: 78.56221458977012
- type: mrr_at_1000
value: 78.56669970642338
- type: mrr_at_20
value: 78.49688805346696
- type: mrr_at_3
value: 76.33333333333333
- type: mrr_at_5
value: 77.70833333333333
- type: nauc_map_at_1000_diff1
value: 18.465085922071346
- type: nauc_map_at_1000_max
value: 24.29804638788498
- type: nauc_map_at_1000_std
value: 22.380463943423514
- type: nauc_map_at_100_diff1
value: 19.37585410674523
- type: nauc_map_at_100_max
value: 22.56424042509462
- type: nauc_map_at_100_std
value: 19.672237275984426
- type: nauc_map_at_10_diff1
value: 23.597788166305577
- type: nauc_map_at_10_max
value: 9.157316105122925
- type: nauc_map_at_10_std
value: -3.8881247055786807
- type: nauc_map_at_1_diff1
value: 43.96699602275052
- type: nauc_map_at_1_max
value: -0.7577088440873263
- type: nauc_map_at_1_std
value: -17.732463891968404
- type: nauc_map_at_20_diff1
value: 22.326759054850097
- type: nauc_map_at_20_max
value: 14.879191412167703
- type: nauc_map_at_20_std
value: 5.405751236575241
- type: nauc_map_at_3_diff1
value: 28.73583545428074
- type: nauc_map_at_3_max
value: 1.5986597211018239
- type: nauc_map_at_3_std
value: -16.512455883681515
- type: nauc_map_at_5_diff1
value: 25.401810959155057
- type: nauc_map_at_5_max
value: 4.418875376978587
- type: nauc_map_at_5_std
value: -12.296750992013052
- type: nauc_mrr_at_1000_diff1
value: 51.228801807498584
- type: nauc_mrr_at_1000_max
value: 61.040998883279585
- type: nauc_mrr_at_1000_std
value: 40.93983887257123
- type: nauc_mrr_at_100_diff1
value: 51.23715338435314
- type: nauc_mrr_at_100_max
value: 61.03971408781317
- type: nauc_mrr_at_100_std
value: 40.91796923590573
- type: nauc_mrr_at_10_diff1
value: 51.1214868552331
- type: nauc_mrr_at_10_max
value: 61.03069045590881
- type: nauc_mrr_at_10_std
value: 40.661621199704264
- type: nauc_mrr_at_1_diff1
value: 50.84660003035892
- type: nauc_mrr_at_1_max
value: 60.692091499960895
- type: nauc_mrr_at_1_std
value: 42.126228731502955
- type: nauc_mrr_at_20_diff1
value: 51.0402624284872
- type: nauc_mrr_at_20_max
value: 60.94577844338166
- type: nauc_mrr_at_20_std
value: 40.89505950503613
- type: nauc_mrr_at_3_diff1
value: 51.771113665996516
- type: nauc_mrr_at_3_max
value: 61.65264793077224
- type: nauc_mrr_at_3_std
value: 41.75781827057092
- type: nauc_mrr_at_5_diff1
value: 51.0656793772882
- type: nauc_mrr_at_5_max
value: 61.08042065139715
- type: nauc_mrr_at_5_std
value: 41.11203271084835
- type: nauc_ndcg_at_1000_diff1
value: 22.347978262245107
- type: nauc_ndcg_at_1000_max
value: 36.56458763955002
- type: nauc_ndcg_at_1000_std
value: 35.99616144258822
- type: nauc_ndcg_at_100_diff1
value: 23.1120990977162
- type: nauc_ndcg_at_100_max
value: 30.79663306311657
- type: nauc_ndcg_at_100_std
value: 27.387572106784297
- type: nauc_ndcg_at_10_diff1
value: 23.329746066899656
- type: nauc_ndcg_at_10_max
value: 28.69246947084685
- type: nauc_ndcg_at_10_std
value: 21.457736188325345
- type: nauc_ndcg_at_1_diff1
value: 39.99399153456974
- type: nauc_ndcg_at_1_max
value: 38.12447856470389
- type: nauc_ndcg_at_1_std
value: 27.768869260384676
- type: nauc_ndcg_at_20_diff1
value: 24.945374175339907
- type: nauc_ndcg_at_20_max
value: 27.67836982165295
- type: nauc_ndcg_at_20_std
value: 19.7933631060578
- type: nauc_ndcg_at_3_diff1
value: 26.063492354398527
- type: nauc_ndcg_at_3_max
value: 33.06541959550656
- type: nauc_ndcg_at_3_std
value: 23.278902797288726
- type: nauc_ndcg_at_5_diff1
value: 22.521596060750035
- type: nauc_ndcg_at_5_max
value: 31.210005673730784
- type: nauc_ndcg_at_5_std
value: 22.893106456317927
- type: nauc_precision_at_1000_diff1
value: -19.845356495096006
- type: nauc_precision_at_1000_max
value: 4.163819381816099
- type: nauc_precision_at_1000_std
value: 7.612952884590339
- type: nauc_precision_at_100_diff1
value: -8.2679285153361
- type: nauc_precision_at_100_max
value: 29.78018175573565
- type: nauc_precision_at_100_std
value: 41.07244463956215
- type: nauc_precision_at_10_diff1
value: -3.2451428407349057
- type: nauc_precision_at_10_max
value: 36.92563008274906
- type: nauc_precision_at_10_std
value: 45.06962043489777
- type: nauc_precision_at_1_diff1
value: 50.84660003035892
- type: nauc_precision_at_1_max
value: 60.692091499960895
- type: nauc_precision_at_1_std
value: 42.126228731502955
- type: nauc_precision_at_20_diff1
value: -3.432279149061878
- type: nauc_precision_at_20_max
value: 37.013592483974875
- type: nauc_precision_at_20_std
value: 46.47324739428665
- type: nauc_precision_at_3_diff1
value: 7.28495481051025
- type: nauc_precision_at_3_max
value: 38.66372411741402
- type: nauc_precision_at_3_std
value: 35.23163993723955
- type: nauc_precision_at_5_diff1
value: -0.16540230063716202
- type: nauc_precision_at_5_max
value: 37.322494255721715
- type: nauc_precision_at_5_std
value: 39.666653561269754
- type: nauc_recall_at_1000_diff1
value: 11.388326469283681
- type: nauc_recall_at_1000_max
value: 32.698146308591674
- type: nauc_recall_at_1000_std
value: 49.48830488070777
- type: nauc_recall_at_100_diff1
value: 11.497443532756819
- type: nauc_recall_at_100_max
value: 20.196970431621615
- type: nauc_recall_at_100_std
value: 23.688772100803433
- type: nauc_recall_at_10_diff1
value: 16.519851398596003
- type: nauc_recall_at_10_max
value: 0.774066845071221
- type: nauc_recall_at_10_std
value: -10.89514647001814
- type: nauc_recall_at_1_diff1
value: 43.96699602275052
- type: nauc_recall_at_1_max
value: -0.7577088440873263
- type: nauc_recall_at_1_std
value: -17.732463891968404
- type: nauc_recall_at_20_diff1
value: 15.202960269878258
- type: nauc_recall_at_20_max
value: 7.067263295590253
- type: nauc_recall_at_20_std
value: -0.06050108222640702
- type: nauc_recall_at_3_diff1
value: 24.066741361525125
- type: nauc_recall_at_3_max
value: -2.1961525860488424
- type: nauc_recall_at_3_std
value: -19.48307077749568
- type: nauc_recall_at_5_diff1
value: 20.086330794102707
- type: nauc_recall_at_5_max
value: -0.8866528062747986
- type: nauc_recall_at_5_std
value: -16.53799173962747
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 45.553
- type: ndcg_at_100
value: 51.014
- type: ndcg_at_1000
value: 58.226
- type: ndcg_at_20
value: 44.98
- type: ndcg_at_3
value: 48.981
- type: ndcg_at_5
value: 46.794999999999995
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 36.85
- type: precision_at_100
value: 11.955
- type: precision_at_1000
value: 2.247
- type: precision_at_20
value: 28.075
- type: precision_at_3
value: 52.666999999999994
- type: precision_at_5
value: 45.85
- type: recall_at_1
value: 9.168999999999999
- type: recall_at_10
value: 28.796
- type: recall_at_100
value: 58.892999999999994
- type: recall_at_1000
value: 81.644
- type: recall_at_20
value: 36.659000000000006
- type: recall_at_3
value: 16.709
- type: recall_at_5
value: 21.387
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 88.41
- type: map_at_1
value: 75.637
- type: map_at_10
value: 84.674
- type: map_at_100
value: 84.909
- type: map_at_1000
value: 84.92
- type: map_at_20
value: 84.836
- type: map_at_3
value: 83.44200000000001
- type: map_at_5
value: 84.28099999999999
- type: mrr_at_1
value: 81.56315631563157
- type: mrr_at_10
value: 88.89571695264748
- type: mrr_at_100
value: 88.93671417216285
- type: mrr_at_1000
value: 88.93708016011664
- type: mrr_at_20
value: 88.9311652665256
- type: mrr_at_3
value: 88.20882088208805
- type: mrr_at_5
value: 88.72937293729349
- type: nauc_map_at_1000_diff1
value: 54.41216035074026
- type: nauc_map_at_1000_max
value: 13.346153003554361
- type: nauc_map_at_1000_std
value: -6.721664416152164
- type: nauc_map_at_100_diff1
value: 54.36538350995795
- type: nauc_map_at_100_max
value: 13.355583381471298
- type: nauc_map_at_100_std
value: -6.696921015641016
- type: nauc_map_at_10_diff1
value: 54.0389127730555
- type: nauc_map_at_10_max
value: 13.387802159150663
- type: nauc_map_at_10_std
value: -6.73514381731833
- type: nauc_map_at_1_diff1
value: 57.99489574836453
- type: nauc_map_at_1_max
value: 7.830032589171654
- type: nauc_map_at_1_std
value: -10.140208285080295
- type: nauc_map_at_20_diff1
value: 54.16841004736076
- type: nauc_map_at_20_max
value: 13.345607363689746
- type: nauc_map_at_20_std
value: -6.663119775158465
- type: nauc_map_at_3_diff1
value: 53.82879543599303
- type: nauc_map_at_3_max
value: 12.716952288433902
- type: nauc_map_at_3_std
value: -7.746102082835598
- type: nauc_map_at_5_diff1
value: 53.82838395350109
- type: nauc_map_at_5_max
value: 13.487373534211702
- type: nauc_map_at_5_std
value: -6.869504398693434
- type: nauc_mrr_at_1000_diff1
value: 68.92783546581906
- type: nauc_mrr_at_1000_max
value: 12.076297180596592
- type: nauc_mrr_at_1000_std
value: -13.306257067567998
- type: nauc_mrr_at_100_diff1
value: 68.92780219775517
- type: nauc_mrr_at_100_max
value: 12.078449805054374
- type: nauc_mrr_at_100_std
value: -13.303524852703719
- type: nauc_mrr_at_10_diff1
value: 68.92686206881258
- type: nauc_mrr_at_10_max
value: 12.273295656884873
- type: nauc_mrr_at_10_std
value: -13.222483496603965
- type: nauc_mrr_at_1_diff1
value: 70.1738022073041
- type: nauc_mrr_at_1_max
value: 9.378639533482806
- type: nauc_mrr_at_1_std
value: -13.444033823202348
- type: nauc_mrr_at_20_diff1
value: 68.91161304905303
- type: nauc_mrr_at_20_max
value: 12.117091514817885
- type: nauc_mrr_at_20_std
value: -13.258261750160239
- type: nauc_mrr_at_3_diff1
value: 68.61982455945467
- type: nauc_mrr_at_3_max
value: 12.608213879734578
- type: nauc_mrr_at_3_std
value: -13.558003431587839
- type: nauc_mrr_at_5_diff1
value: 68.81439097457242
- type: nauc_mrr_at_5_max
value: 12.54025598903624
- type: nauc_mrr_at_5_std
value: -13.199231514972093
- type: nauc_ndcg_at_1000_diff1
value: 56.47563443877495
- type: nauc_ndcg_at_1000_max
value: 14.508331783439466
- type: nauc_ndcg_at_1000_std
value: -6.206829736668775
- type: nauc_ndcg_at_100_diff1
value: 55.54015515673474
- type: nauc_ndcg_at_100_max
value: 14.753595778278136
- type: nauc_ndcg_at_100_std
value: -5.638517949568802
- type: nauc_ndcg_at_10_diff1
value: 54.220845223257996
- type: nauc_ndcg_at_10_max
value: 15.265309648490021
- type: nauc_ndcg_at_10_std
value: -5.516276098929109
- type: nauc_ndcg_at_1_diff1
value: 70.1738022073041
- type: nauc_ndcg_at_1_max
value: 9.378639533482806
- type: nauc_ndcg_at_1_std
value: -13.444033823202348
- type: nauc_ndcg_at_20_diff1
value: 54.481406100854635
- type: nauc_ndcg_at_20_max
value: 14.868763583210498
- type: nauc_ndcg_at_20_std
value: -5.328097380018734
- type: nauc_ndcg_at_3_diff1
value: 54.94411725607744
- type: nauc_ndcg_at_3_max
value: 14.27186734506607
- type: nauc_ndcg_at_3_std
value: -7.894724962312474
- type: nauc_ndcg_at_5_diff1
value: 54.08048166974806
- type: nauc_ndcg_at_5_max
value: 15.528233170721006
- type: nauc_ndcg_at_5_std
value: -5.984768714537104
- type: nauc_precision_at_1000_diff1
value: -8.744323640074445
- type: nauc_precision_at_1000_max
value: -0.01881224392053465
- type: nauc_precision_at_1000_std
value: 3.8721477979260635
- type: nauc_precision_at_100_diff1
value: -11.86150156952171
- type: nauc_precision_at_100_max
value: 3.2736651314552314
- type: nauc_precision_at_100_std
value: 8.12687620615509
- type: nauc_precision_at_10_diff1
value: -10.360708676781178
- type: nauc_precision_at_10_max
value: 10.945552490433458
- type: nauc_precision_at_10_std
value: 11.016707653014485
- type: nauc_precision_at_1_diff1
value: 70.1738022073041
- type: nauc_precision_at_1_max
value: 9.378639533482806
- type: nauc_precision_at_1_std
value: -13.444033823202348
- type: nauc_precision_at_20_diff1
value: -13.557721925696583
- type: nauc_precision_at_20_max
value: 6.331386521718574
- type: nauc_precision_at_20_std
value: 10.322188778142388
- type: nauc_precision_at_3_diff1
value: 15.139456770248968
- type: nauc_precision_at_3_max
value: 17.10220985600708
- type: nauc_precision_at_3_std
value: 3.0448183682558074
- type: nauc_precision_at_5_diff1
value: -1.9825577548111102
- type: nauc_precision_at_5_max
value: 17.139148127012625
- type: nauc_precision_at_5_std
value: 10.598435750554753
- type: nauc_recall_at_1000_diff1
value: 15.641740744283005
- type: nauc_recall_at_1000_max
value: 44.65315702195612
- type: nauc_recall_at_1000_std
value: 52.34265862835513
- type: nauc_recall_at_100_diff1
value: 5.254385435323394
- type: nauc_recall_at_100_max
value: 38.53577774395794
- type: nauc_recall_at_100_std
value: 43.47744274335829
- type: nauc_recall_at_10_diff1
value: 19.135735476268042
- type: nauc_recall_at_10_max
value: 30.05417445923848
- type: nauc_recall_at_10_std
value: 18.3988023241141
- type: nauc_recall_at_1_diff1
value: 57.99489574836453
- type: nauc_recall_at_1_max
value: 7.830032589171654
- type: nauc_recall_at_1_std
value: -10.140208285080295
- type: nauc_recall_at_20_diff1
value: 9.444797759735126
- type: nauc_recall_at_20_max
value: 31.001311675371017
- type: nauc_recall_at_20_std
value: 29.351418893822178
- type: nauc_recall_at_3_diff1
value: 36.88862653262064
- type: nauc_recall_at_3_max
value: 19.845892741607823
- type: nauc_recall_at_3_std
value: -1.0584273105890794
- type: nauc_recall_at_5_diff1
value: 27.360718561944974
- type: nauc_recall_at_5_max
value: 26.698311215441738
- type: nauc_recall_at_5_std
value: 8.97113997755362
- type: ndcg_at_1
value: 81.563
- type: ndcg_at_10
value: 88.41
- type: ndcg_at_100
value: 89.101
- type: ndcg_at_1000
value: 89.25800000000001
- type: ndcg_at_20
value: 88.79
- type: ndcg_at_3
value: 86.599
- type: ndcg_at_5
value: 87.74
- type: precision_at_1
value: 81.563
- type: precision_at_10
value: 10.699
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.479
- type: precision_at_3
value: 33.238
- type: precision_at_5
value: 20.744
- type: recall_at_1
value: 75.637
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 98.072
- type: recall_at_1000
value: 98.951
- type: recall_at_20
value: 96.792
- type: recall_at_3
value: 90.79599999999999
- type: recall_at_5
value: 93.674
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 42.396
- type: map_at_1
value: 21.711
- type: map_at_10
value: 34.628
- type: map_at_100
value: 36.549
- type: map_at_1000
value: 36.719
- type: map_at_20
value: 35.673
- type: map_at_3
value: 30.585
- type: map_at_5
value: 32.875
- type: mrr_at_1
value: 41.82098765432099
- type: mrr_at_10
value: 50.69505682931607
- type: mrr_at_100
value: 51.50556608727901
- type: mrr_at_1000
value: 51.53870583208304
- type: mrr_at_20
value: 51.15345764364655
- type: mrr_at_3
value: 48.35390946502059
- type: mrr_at_5
value: 49.87397119341563
- type: nauc_map_at_1000_diff1
value: 45.182252919583895
- type: nauc_map_at_1000_max
value: 35.66124930024801
- type: nauc_map_at_1000_std
value: -0.6925562638650965
- type: nauc_map_at_100_diff1
value: 45.116964706960125
- type: nauc_map_at_100_max
value: 35.54990469525889
- type: nauc_map_at_100_std
value: -0.6667263852859368
- type: nauc_map_at_10_diff1
value: 45.39189096228184
- type: nauc_map_at_10_max
value: 34.780111261901
- type: nauc_map_at_10_std
value: -1.8169859294150819
- type: nauc_map_at_1_diff1
value: 47.72764937952259
- type: nauc_map_at_1_max
value: 24.83306559709341
- type: nauc_map_at_1_std
value: -4.714128457297418
- type: nauc_map_at_20_diff1
value: 45.17073365898278
- type: nauc_map_at_20_max
value: 35.0938403469058
- type: nauc_map_at_20_std
value: -1.373412631183604
- type: nauc_map_at_3_diff1
value: 46.525724305731295
- type: nauc_map_at_3_max
value: 31.042538866512597
- type: nauc_map_at_3_std
value: -4.119355935975354
- type: nauc_map_at_5_diff1
value: 45.79569633383187
- type: nauc_map_at_5_max
value: 32.88779656647293
- type: nauc_map_at_5_std
value: -3.2518474739335312
- type: nauc_mrr_at_1000_diff1
value: 52.83619185487903
- type: nauc_mrr_at_1000_max
value: 42.30310720405186
- type: nauc_mrr_at_1000_std
value: -1.1487703348518024
- type: nauc_mrr_at_100_diff1
value: 52.82248853996664
- type: nauc_mrr_at_100_max
value: 42.30549701564678
- type: nauc_mrr_at_100_std
value: -1.1240113031894834
- type: nauc_mrr_at_10_diff1
value: 52.74644276642243
- type: nauc_mrr_at_10_max
value: 42.39103029476398
- type: nauc_mrr_at_10_std
value: -1.1043413237848576
- type: nauc_mrr_at_1_diff1
value: 54.810335521617326
- type: nauc_mrr_at_1_max
value: 40.733260207843394
- type: nauc_mrr_at_1_std
value: -4.452554921565855
- type: nauc_mrr_at_20_diff1
value: 52.788257862499954
- type: nauc_mrr_at_20_max
value: 42.32658875363406
- type: nauc_mrr_at_20_std
value: -1.2209728080684497
- type: nauc_mrr_at_3_diff1
value: 53.43281175319808
- type: nauc_mrr_at_3_max
value: 41.735942650867926
- type: nauc_mrr_at_3_std
value: -2.462688102468019
- type: nauc_mrr_at_5_diff1
value: 52.874037126566606
- type: nauc_mrr_at_5_max
value: 41.93740449458822
- type: nauc_mrr_at_5_std
value: -1.2928874908441947
- type: nauc_ndcg_at_1000_diff1
value: 46.5532425476402
- type: nauc_ndcg_at_1000_max
value: 40.369611603370515
- type: nauc_ndcg_at_1000_std
value: 3.472567588386994
- type: nauc_ndcg_at_100_diff1
value: 45.75244404695404
- type: nauc_ndcg_at_100_max
value: 39.36470550675439
- type: nauc_ndcg_at_100_std
value: 4.356189041115731
- type: nauc_ndcg_at_10_diff1
value: 46.005135323539704
- type: nauc_ndcg_at_10_max
value: 37.89018165334218
- type: nauc_ndcg_at_10_std
value: 0.7129618297768014
- type: nauc_ndcg_at_1_diff1
value: 54.810335521617326
- type: nauc_ndcg_at_1_max
value: 40.733260207843394
- type: nauc_ndcg_at_1_std
value: -4.452554921565855
- type: nauc_ndcg_at_20_diff1
value: 45.841552790490034
- type: nauc_ndcg_at_20_max
value: 38.04992825472661
- type: nauc_ndcg_at_20_std
value: 1.2748305707955212
- type: nauc_ndcg_at_3_diff1
value: 46.683033449357744
- type: nauc_ndcg_at_3_max
value: 37.46397870760607
- type: nauc_ndcg_at_3_std
value: -2.3421854966319824
- type: nauc_ndcg_at_5_diff1
value: 45.82409645378457
- type: nauc_ndcg_at_5_max
value: 36.27588234096716
- type: nauc_ndcg_at_5_std
value: -1.5141197170944254
- type: nauc_precision_at_1000_diff1
value: -3.137944321071885
- type: nauc_precision_at_1000_max
value: 24.12803166253776
- type: nauc_precision_at_1000_std
value: 11.076454789944101
- type: nauc_precision_at_100_diff1
value: 3.9896283891401048
- type: nauc_precision_at_100_max
value: 31.00198316788829
- type: nauc_precision_at_100_std
value: 15.725887643803063
- type: nauc_precision_at_10_diff1
value: 20.493420889888394
- type: nauc_precision_at_10_max
value: 41.689699671507405
- type: nauc_precision_at_10_std
value: 9.374983385669914
- type: nauc_precision_at_1_diff1
value: 54.810335521617326
- type: nauc_precision_at_1_max
value: 40.733260207843394
- type: nauc_precision_at_1_std
value: -4.452554921565855
- type: nauc_precision_at_20_diff1
value: 15.02911800246446
- type: nauc_precision_at_20_max
value: 39.227068888505
- type: nauc_precision_at_20_std
value: 11.755558515319404
- type: nauc_precision_at_3_diff1
value: 34.044986535461746
- type: nauc_precision_at_3_max
value: 40.96605829831656
- type: nauc_precision_at_3_std
value: 1.1903535705688038
- type: nauc_precision_at_5_diff1
value: 26.617002443432707
- type: nauc_precision_at_5_max
value: 40.60413785916794
- type: nauc_precision_at_5_std
value: 3.6984531670502814
- type: nauc_recall_at_1000_diff1
value: 26.96489389440101
- type: nauc_recall_at_1000_max
value: 41.811583968523955
- type: nauc_recall_at_1000_std
value: 41.5719519496712
- type: nauc_recall_at_100_diff1
value: 28.50851434908223
- type: nauc_recall_at_100_max
value: 32.19528060706322
- type: nauc_recall_at_100_std
value: 25.56935294258179
- type: nauc_recall_at_10_diff1
value: 35.139582891180964
- type: nauc_recall_at_10_max
value: 32.15221840434225
- type: nauc_recall_at_10_std
value: 5.550434611582702
- type: nauc_recall_at_1_diff1
value: 47.72764937952259
- type: nauc_recall_at_1_max
value: 24.83306559709341
- type: nauc_recall_at_1_std
value: -4.714128457297418
- type: nauc_recall_at_20_diff1
value: 32.78604811055205
- type: nauc_recall_at_20_max
value: 29.62940720700254
- type: nauc_recall_at_20_std
value: 6.769941491859872
- type: nauc_recall_at_3_diff1
value: 40.76090616138699
- type: nauc_recall_at_3_max
value: 27.506425490226867
- type: nauc_recall_at_3_std
value: -2.608872693119243
- type: nauc_recall_at_5_diff1
value: 37.06532485024711
- type: nauc_recall_at_5_max
value: 27.704150556658448
- type: nauc_recall_at_5_std
value: 0.4718707152343872
- type: ndcg_at_1
value: 41.821000000000005
- type: ndcg_at_10
value: 42.396
- type: ndcg_at_100
value: 49.370000000000005
- type: ndcg_at_1000
value: 52.251000000000005
- type: ndcg_at_20
value: 45.097
- type: ndcg_at_3
value: 39.028
- type: ndcg_at_5
value: 40.222
- type: precision_at_1
value: 41.821000000000005
- type: precision_at_10
value: 11.451
- type: precision_at_100
value: 1.863
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_20
value: 6.798
- type: precision_at_3
value: 25.823
- type: precision_at_5
value: 18.735
- type: recall_at_1
value: 21.711
- type: recall_at_10
value: 48.862
- type: recall_at_100
value: 74.708
- type: recall_at_1000
value: 91.865
- type: recall_at_20
value: 57.50999999999999
- type: recall_at_3
value: 35.85
- type: recall_at_5
value: 41.976
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 72.21
- type: map_at_1
value: 39.487
- type: map_at_10
value: 63.949999999999996
- type: map_at_100
value: 64.873
- type: map_at_1000
value: 64.927
- type: map_at_20
value: 64.529
- type: map_at_3
value: 60.243
- type: map_at_5
value: 62.613
- type: mrr_at_1
value: 78.97366644159351
- type: mrr_at_10
value: 84.84600173627825
- type: mrr_at_100
value: 85.0172804866798
- type: mrr_at_1000
value: 85.02245651152857
- type: mrr_at_20
value: 84.9625577788225
- type: mrr_at_3
value: 83.90276839972962
- type: mrr_at_5
value: 84.48278190411845
- type: nauc_map_at_1000_diff1
value: 19.825004700775164
- type: nauc_map_at_1000_max
value: 19.943221724164182
- type: nauc_map_at_1000_std
value: 10.068951166560058
- type: nauc_map_at_100_diff1
value: 19.80139472181137
- type: nauc_map_at_100_max
value: 19.938006132804347
- type: nauc_map_at_100_std
value: 10.100008107666842
- type: nauc_map_at_10_diff1
value: 19.53604502514735
- type: nauc_map_at_10_max
value: 19.62768870331064
- type: nauc_map_at_10_std
value: 9.446859074725705
- type: nauc_map_at_1_diff1
value: 67.7764270505257
- type: nauc_map_at_1_max
value: 38.45166604737058
- type: nauc_map_at_1_std
value: 1.9919181988552352
- type: nauc_map_at_20_diff1
value: 19.635871913149913
- type: nauc_map_at_20_max
value: 19.812838965919155
- type: nauc_map_at_20_std
value: 9.905163140101845
- type: nauc_map_at_3_diff1
value: 18.965707122532212
- type: nauc_map_at_3_max
value: 17.878860313056517
- type: nauc_map_at_3_std
value: 6.189378752019195
- type: nauc_map_at_5_diff1
value: 19.493354049675954
- type: nauc_map_at_5_max
value: 19.24527088109141
- type: nauc_map_at_5_std
value: 8.283883139680066
- type: nauc_mrr_at_1000_diff1
value: 66.87150374356781
- type: nauc_mrr_at_1000_max
value: 41.413456443203984
- type: nauc_mrr_at_1000_std
value: 4.140387282484357
- type: nauc_mrr_at_100_diff1
value: 66.87178015619061
- type: nauc_mrr_at_100_max
value: 41.419754763150834
- type: nauc_mrr_at_100_std
value: 4.15222235416704
- type: nauc_mrr_at_10_diff1
value: 66.89720586892301
- type: nauc_mrr_at_10_max
value: 41.56353878125211
- type: nauc_mrr_at_10_std
value: 4.213376519922392
- type: nauc_mrr_at_1_diff1
value: 67.7764270505257
- type: nauc_mrr_at_1_max
value: 38.45166604737058
- type: nauc_mrr_at_1_std
value: 1.9919181988552352
- type: nauc_mrr_at_20_diff1
value: 66.8714688713149
- type: nauc_mrr_at_20_max
value: 41.46170778986735
- type: nauc_mrr_at_20_std
value: 4.165154741309859
- type: nauc_mrr_at_3_diff1
value: 66.31615462679144
- type: nauc_mrr_at_3_max
value: 41.419637693259936
- type: nauc_mrr_at_3_std
value: 3.814834551396097
- type: nauc_mrr_at_5_diff1
value: 66.7289413087213
- type: nauc_mrr_at_5_max
value: 41.668346356371586
- type: nauc_mrr_at_5_std
value: 4.116331539882484
- type: nauc_ndcg_at_1000_diff1
value: 26.37325375970598
- type: nauc_ndcg_at_1000_max
value: 24.850915174721735
- type: nauc_ndcg_at_1000_std
value: 13.37585683440429
- type: nauc_ndcg_at_100_diff1
value: 25.591771178059503
- type: nauc_ndcg_at_100_max
value: 24.562820829532473
- type: nauc_ndcg_at_100_std
value: 14.093690500501541
- type: nauc_ndcg_at_10_diff1
value: 24.64600598115805
- type: nauc_ndcg_at_10_max
value: 23.543499404760023
- type: nauc_ndcg_at_10_std
value: 11.55823632781553
- type: nauc_ndcg_at_1_diff1
value: 67.7764270505257
- type: nauc_ndcg_at_1_max
value: 38.45166604737058
- type: nauc_ndcg_at_1_std
value: 1.9919181988552352
- type: nauc_ndcg_at_20_diff1
value: 24.757843275306726
- type: nauc_ndcg_at_20_max
value: 23.951154200380827
- type: nauc_ndcg_at_20_std
value: 12.931320453044886
- type: nauc_ndcg_at_3_diff1
value: 24.37742630418847
- type: nauc_ndcg_at_3_max
value: 21.310512304883723
- type: nauc_ndcg_at_3_std
value: 6.503993200818077
- type: nauc_ndcg_at_5_diff1
value: 24.813706829269716
- type: nauc_ndcg_at_5_max
value: 22.993657212898
- type: nauc_ndcg_at_5_std
value: 9.34462052506809
- type: nauc_precision_at_1000_diff1
value: -0.6506415756958156
- type: nauc_precision_at_1000_max
value: 28.039755644694875
- type: nauc_precision_at_1000_std
value: 53.46474329623814
- type: nauc_precision_at_100_diff1
value: 3.78462668236152
- type: nauc_precision_at_100_max
value: 22.501700881673862
- type: nauc_precision_at_100_std
value: 40.56672716474142
- type: nauc_precision_at_10_diff1
value: 9.156113228907534
- type: nauc_precision_at_10_max
value: 19.734206254833254
- type: nauc_precision_at_10_std
value: 19.986282545779602
- type: nauc_precision_at_1_diff1
value: 67.7764270505257
- type: nauc_precision_at_1_max
value: 38.45166604737058
- type: nauc_precision_at_1_std
value: 1.9919181988552352
- type: nauc_precision_at_20_diff1
value: 6.6164335644470125
- type: nauc_precision_at_20_max
value: 20.29343459608317
- type: nauc_precision_at_20_std
value: 26.51115475333977
- type: nauc_precision_at_3_diff1
value: 12.476520554399546
- type: nauc_precision_at_3_max
value: 16.69401409858964
- type: nauc_precision_at_3_std
value: 8.165880294907444
- type: nauc_precision_at_5_diff1
value: 11.783242828320958
- type: nauc_precision_at_5_max
value: 19.0679467875759
- type: nauc_precision_at_5_std
value: 13.615358345509884
- type: nauc_recall_at_1000_diff1
value: -0.6506415756960168
- type: nauc_recall_at_1000_max
value: 28.039755644694786
- type: nauc_recall_at_1000_std
value: 53.46474329623801
- type: nauc_recall_at_100_diff1
value: 3.7846266823613877
- type: nauc_recall_at_100_max
value: 22.501700881674008
- type: nauc_recall_at_100_std
value: 40.566727164741366
- type: nauc_recall_at_10_diff1
value: 9.15611322890755
- type: nauc_recall_at_10_max
value: 19.73420625483318
- type: nauc_recall_at_10_std
value: 19.98628254577951
- type: nauc_recall_at_1_diff1
value: 67.7764270505257
- type: nauc_recall_at_1_max
value: 38.45166604737058
- type: nauc_recall_at_1_std
value: 1.9919181988552352
- type: nauc_recall_at_20_diff1
value: 6.616433564446929
- type: nauc_recall_at_20_max
value: 20.293434596083248
- type: nauc_recall_at_20_std
value: 26.5111547533396
- type: nauc_recall_at_3_diff1
value: 12.476520554399531
- type: nauc_recall_at_3_max
value: 16.69401409858966
- type: nauc_recall_at_3_std
value: 8.165880294907438
- type: nauc_recall_at_5_diff1
value: 11.783242828320999
- type: nauc_recall_at_5_max
value: 19.067946787575845
- type: nauc_recall_at_5_std
value: 13.61535834550991
- type: ndcg_at_1
value: 78.974
- type: ndcg_at_10
value: 72.21
- type: ndcg_at_100
value: 75.264
- type: ndcg_at_1000
value: 76.259
- type: ndcg_at_20
value: 73.628
- type: ndcg_at_3
value: 67.047
- type: ndcg_at_5
value: 69.974
- type: precision_at_1
value: 78.974
- type: precision_at_10
value: 15.267
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.09
- type: precision_at_3
value: 43.309
- type: precision_at_5
value: 28.294000000000004
- type: recall_at_1
value: 39.487
- type: recall_at_10
value: 76.334
- type: recall_at_100
value: 88.076
- type: recall_at_1000
value: 94.59100000000001
- type: recall_at_20
value: 80.898
- type: recall_at_3
value: 64.96300000000001
- type: recall_at_5
value: 70.736
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 42.027
- type: map_at_1
value: 22.118
- type: map_at_10
value: 34.816
- type: map_at_100
value: 35.983
- type: map_at_1000
value: 36.028999999999996
- type: map_at_20
value: 35.545
- type: map_at_3
value: 30.752000000000002
- type: map_at_5
value: 33.114
- type: mrr_at_1
value: 22.793696275071635
- type: mrr_at_10
value: 35.47250079592483
- type: mrr_at_100
value: 36.576471512902856
- type: mrr_at_1000
value: 36.616205680509786
- type: mrr_at_20
value: 36.16557033864942
- type: mrr_at_3
value: 31.48758357211065
- type: mrr_at_5
value: 33.80563514804202
- type: nauc_map_at_1000_diff1
value: 32.89234100489284
- type: nauc_map_at_1000_max
value: 1.1802816553581001
- type: nauc_map_at_1000_std
value: -20.187692925732446
- type: nauc_map_at_100_diff1
value: 32.88694493681772
- type: nauc_map_at_100_max
value: 1.1732717578080365
- type: nauc_map_at_100_std
value: -20.164165529035245
- type: nauc_map_at_10_diff1
value: 32.826182211848796
- type: nauc_map_at_10_max
value: 1.1551262165737235
- type: nauc_map_at_10_std
value: -20.88326292319754
- type: nauc_map_at_1_diff1
value: 36.12732122790642
- type: nauc_map_at_1_max
value: 1.8197550109156913
- type: nauc_map_at_1_std
value: -17.205625720792167
- type: nauc_map_at_20_diff1
value: 32.83333177195551
- type: nauc_map_at_20_max
value: 1.0937431645506202
- type: nauc_map_at_20_std
value: -20.503956514646145
- type: nauc_map_at_3_diff1
value: 32.76264193805814
- type: nauc_map_at_3_max
value: 0.8560962042500389
- type: nauc_map_at_3_std
value: -20.608930717315577
- type: nauc_map_at_5_diff1
value: 32.78673238978775
- type: nauc_map_at_5_max
value: 1.0511863039329437
- type: nauc_map_at_5_std
value: -21.02164728626011
- type: nauc_mrr_at_1000_diff1
value: 32.610323934702286
- type: nauc_mrr_at_1000_max
value: 1.276669121901405
- type: nauc_mrr_at_1000_std
value: -19.908120615285043
- type: nauc_mrr_at_100_diff1
value: 32.601373758102795
- type: nauc_mrr_at_100_max
value: 1.2752735149992132
- type: nauc_mrr_at_100_std
value: -19.87937042610101
- type: nauc_mrr_at_10_diff1
value: 32.55795432078168
- type: nauc_mrr_at_10_max
value: 1.2881786969258637
- type: nauc_mrr_at_10_std
value: -20.54564519015977
- type: nauc_mrr_at_1_diff1
value: 35.596301376443726
- type: nauc_mrr_at_1_max
value: 1.7633238037306902
- type: nauc_mrr_at_1_std
value: -17.1999420019887
- type: nauc_mrr_at_20_diff1
value: 32.57185739111023
- type: nauc_mrr_at_20_max
value: 1.2212620853201877
- type: nauc_mrr_at_20_std
value: -20.179517281041264
- type: nauc_mrr_at_3_diff1
value: 32.42681377099514
- type: nauc_mrr_at_3_max
value: 0.8745921708861145
- type: nauc_mrr_at_3_std
value: -20.41017687790572
- type: nauc_mrr_at_5_diff1
value: 32.499107129648266
- type: nauc_mrr_at_5_max
value: 1.1159673851851573
- type: nauc_mrr_at_5_std
value: -20.695143502133824
- type: nauc_ndcg_at_1000_diff1
value: 32.16957965806702
- type: nauc_ndcg_at_1000_max
value: 1.6763998947980905
- type: nauc_ndcg_at_1000_std
value: -18.970592350332893
- type: nauc_ndcg_at_100_diff1
value: 31.977550102558872
- type: nauc_ndcg_at_100_max
value: 1.5625858650110014
- type: nauc_ndcg_at_100_std
value: -17.990456766123835
- type: nauc_ndcg_at_10_diff1
value: 31.82738932481356
- type: nauc_ndcg_at_10_max
value: 1.1661362042692103
- type: nauc_ndcg_at_10_std
value: -21.872680193994217
- type: nauc_ndcg_at_1_diff1
value: 35.596301376443726
- type: nauc_ndcg_at_1_max
value: 1.7633238037306902
- type: nauc_ndcg_at_1_std
value: -17.1999420019887
- type: nauc_ndcg_at_20_diff1
value: 31.749656399266264
- type: nauc_ndcg_at_20_max
value: 0.9629024493088691
- type: nauc_ndcg_at_20_std
value: -20.4379403899277
- type: nauc_ndcg_at_3_diff1
value: 31.731361436850836
- type: nauc_ndcg_at_3_max
value: 0.531749791578849
- type: nauc_ndcg_at_3_std
value: -21.551112910698674
- type: nauc_ndcg_at_5_diff1
value: 31.785373941157303
- type: nauc_ndcg_at_5_max
value: 0.86207769368333
- type: nauc_ndcg_at_5_std
value: -22.24923399160171
- type: nauc_precision_at_1000_diff1
value: -3.841288331986519
- type: nauc_precision_at_1000_max
value: 13.558041371634976
- type: nauc_precision_at_1000_std
value: 15.181510484512827
- type: nauc_precision_at_100_diff1
value: 12.441154582709053
- type: nauc_precision_at_100_max
value: 8.428136255841935
- type: nauc_precision_at_100_std
value: 14.710391839731656
- type: nauc_precision_at_10_diff1
value: 26.185854813986705
- type: nauc_precision_at_10_max
value: 1.6348387310504464
- type: nauc_precision_at_10_std
value: -23.448927004357298
- type: nauc_precision_at_1_diff1
value: 35.596301376443726
- type: nauc_precision_at_1_max
value: 1.7633238037306902
- type: nauc_precision_at_1_std
value: -17.1999420019887
- type: nauc_precision_at_20_diff1
value: 22.69194179544158
- type: nauc_precision_at_20_max
value: 1.2972015009169306
- type: nauc_precision_at_20_std
value: -15.751482380060269
- type: nauc_precision_at_3_diff1
value: 28.255531512125188
- type: nauc_precision_at_3_max
value: -0.3715575458464333
- type: nauc_precision_at_3_std
value: -24.227970454057697
- type: nauc_precision_at_5_diff1
value: 27.65497951098847
- type: nauc_precision_at_5_max
value: 0.449773375292472
- type: nauc_precision_at_5_std
value: -25.37445450938601
- type: nauc_recall_at_1000_diff1
value: 15.243948516763819
- type: nauc_recall_at_1000_max
value: 41.821227805251375
- type: nauc_recall_at_1000_std
value: 61.66297794838101
- type: nauc_recall_at_100_diff1
value: 24.516543685029994
- type: nauc_recall_at_100_max
value: 7.093972966253228
- type: nauc_recall_at_100_std
value: 17.244452321212282
- type: nauc_recall_at_10_diff1
value: 28.404243095182828
- type: nauc_recall_at_10_max
value: 1.0805210480930945
- type: nauc_recall_at_10_std
value: -24.885018657039527
- type: nauc_recall_at_1_diff1
value: 36.12732122790642
- type: nauc_recall_at_1_max
value: 1.8197550109156913
- type: nauc_recall_at_1_std
value: -17.205625720792167
- type: nauc_recall_at_20_diff1
value: 26.956250169438512
- type: nauc_recall_at_20_max
value: 0.023973408161285917
- type: nauc_recall_at_20_std
value: -18.32944444428131
- type: nauc_recall_at_3_diff1
value: 28.9894205130054
- type: nauc_recall_at_3_max
value: -0.36140658021466865
- type: nauc_recall_at_3_std
value: -24.022505107768364
- type: nauc_recall_at_5_diff1
value: 28.907023434955104
- type: nauc_recall_at_5_max
value: 0.2501037567297729
- type: nauc_recall_at_5_std
value: -25.719919602271496
- type: ndcg_at_1
value: 22.794
- type: ndcg_at_10
value: 42.027
- type: ndcg_at_100
value: 47.601
- type: ndcg_at_1000
value: 48.713
- type: ndcg_at_20
value: 44.623000000000005
- type: ndcg_at_3
value: 33.772999999999996
- type: ndcg_at_5
value: 37.991
- type: precision_at_1
value: 22.794
- type: precision_at_10
value: 6.711
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.8920000000000003
- type: precision_at_3
value: 14.46
- type: precision_at_5
value: 10.822
- type: recall_at_1
value: 22.118
- type: recall_at_10
value: 64.201
- type: recall_at_100
value: 89.878
- type: recall_at_1000
value: 98.259
- type: recall_at_20
value: 74.34100000000001
- type: recall_at_3
value: 41.8
- type: recall_at_5
value: 51.959
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 36.201
- type: map_at_1
value: 5.654
- type: map_at_10
value: 13.402
- type: map_at_100
value: 16.849
- type: map_at_1000
value: 18.264
- type: map_at_20
value: 14.832
- type: map_at_3
value: 9.619
- type: map_at_5
value: 11.483
- type: mrr_at_1
value: 47.6780185758514
- type: mrr_at_10
value: 56.47906531033466
- type: mrr_at_100
value: 57.04539749991402
- type: mrr_at_1000
value: 57.08810157607369
- type: mrr_at_20
value: 56.88003170105462
- type: mrr_at_3
value: 54.43756449948401
- type: mrr_at_5
value: 55.660474716202266
- type: nauc_map_at_1000_diff1
value: 31.134615238698192
- type: nauc_map_at_1000_max
value: 36.09522002487132
- type: nauc_map_at_1000_std
value: 14.72627666649002
- type: nauc_map_at_100_diff1
value: 32.777473351864444
- type: nauc_map_at_100_max
value: 35.25391471621035
- type: nauc_map_at_100_std
value: 12.024428973861083
- type: nauc_map_at_10_diff1
value: 36.46466466148528
- type: nauc_map_at_10_max
value: 29.707805406826722
- type: nauc_map_at_10_std
value: 2.0678757794226335
- type: nauc_map_at_1_diff1
value: 54.30208426149679
- type: nauc_map_at_1_max
value: 18.69125148481608
- type: nauc_map_at_1_std
value: -8.970955660291802
- type: nauc_map_at_20_diff1
value: 34.76513311600623
- type: nauc_map_at_20_max
value: 32.20666003570514
- type: nauc_map_at_20_std
value: 5.924889441518581
- type: nauc_map_at_3_diff1
value: 45.73465176835491
- type: nauc_map_at_3_max
value: 23.492291524989106
- type: nauc_map_at_3_std
value: -5.0123536561688855
- type: nauc_map_at_5_diff1
value: 39.7128319374107
- type: nauc_map_at_5_max
value: 25.84231729559691
- type: nauc_map_at_5_std
value: -2.0861428981140344
- type: nauc_mrr_at_1000_diff1
value: 33.0997881703397
- type: nauc_mrr_at_1000_max
value: 52.7089709923531
- type: nauc_mrr_at_1000_std
value: 28.8517952674151
- type: nauc_mrr_at_100_diff1
value: 33.1094984027438
- type: nauc_mrr_at_100_max
value: 52.74301398138847
- type: nauc_mrr_at_100_std
value: 28.897997840300892
- type: nauc_mrr_at_10_diff1
value: 33.300713655464925
- type: nauc_mrr_at_10_max
value: 52.572139698742184
- type: nauc_mrr_at_10_std
value: 28.66875615527188
- type: nauc_mrr_at_1_diff1
value: 32.57632582147155
- type: nauc_mrr_at_1_max
value: 46.020072246328816
- type: nauc_mrr_at_1_std
value: 20.99097889820076
- type: nauc_mrr_at_20_diff1
value: 33.04083904518949
- type: nauc_mrr_at_20_max
value: 52.597451362456994
- type: nauc_mrr_at_20_std
value: 28.681527293587898
- type: nauc_mrr_at_3_diff1
value: 33.64864656322754
- type: nauc_mrr_at_3_max
value: 51.82256412011279
- type: nauc_mrr_at_3_std
value: 27.241260746740686
- type: nauc_mrr_at_5_diff1
value: 33.53201325467246
- type: nauc_mrr_at_5_max
value: 52.79440885773516
- type: nauc_mrr_at_5_std
value: 28.663081392086028
- type: nauc_ndcg_at_1000_diff1
value: 28.632650542040714
- type: nauc_ndcg_at_1000_max
value: 51.24103069835822
- type: nauc_ndcg_at_1000_std
value: 35.05503784757999
- type: nauc_ndcg_at_100_diff1
value: 29.082177715298503
- type: nauc_ndcg_at_100_max
value: 45.24750203464315
- type: nauc_ndcg_at_100_std
value: 27.146548925680914
- type: nauc_ndcg_at_10_diff1
value: 25.123554466093594
- type: nauc_ndcg_at_10_max
value: 42.74355537806512
- type: nauc_ndcg_at_10_std
value: 22.234407997803935
- type: nauc_ndcg_at_1_diff1
value: 33.75083940012058
- type: nauc_ndcg_at_1_max
value: 44.44319402133161
- type: nauc_ndcg_at_1_std
value: 19.146499358406487
- type: nauc_ndcg_at_20_diff1
value: 24.954207968331872
- type: nauc_ndcg_at_20_max
value: 41.25991844405748
- type: nauc_ndcg_at_20_std
value: 22.169009285868864
- type: nauc_ndcg_at_3_diff1
value: 28.186539942033516
- type: nauc_ndcg_at_3_max
value: 44.40790009754965
- type: nauc_ndcg_at_3_std
value: 20.99226576085115
- type: nauc_ndcg_at_5_diff1
value: 25.498387899376706
- type: nauc_ndcg_at_5_max
value: 43.174709766261316
- type: nauc_ndcg_at_5_std
value: 21.88111962672031
- type: nauc_precision_at_1000_diff1
value: -16.22321012507648
- type: nauc_precision_at_1000_max
value: 5.808852256649677
- type: nauc_precision_at_1000_std
value: 19.875641776698824
- type: nauc_precision_at_100_diff1
value: -10.248089374355486
- type: nauc_precision_at_100_max
value: 19.29065415127588
- type: nauc_precision_at_100_std
value: 31.75019665627339
- type: nauc_precision_at_10_diff1
value: 3.6783257583955056
- type: nauc_precision_at_10_max
value: 39.22286010695767
- type: nauc_precision_at_10_std
value: 31.225485732801022
- type: nauc_precision_at_1_diff1
value: 32.57632582147155
- type: nauc_precision_at_1_max
value: 46.020072246328816
- type: nauc_precision_at_1_std
value: 20.99097889820076
- type: nauc_precision_at_20_diff1
value: -3.1632510833242784
- type: nauc_precision_at_20_max
value: 31.575496762405734
- type: nauc_precision_at_20_std
value: 31.576283324468115
- type: nauc_precision_at_3_diff1
value: 17.78864585545647
- type: nauc_precision_at_3_max
value: 44.201289661125585
- type: nauc_precision_at_3_std
value: 25.447840649726693
- type: nauc_precision_at_5_diff1
value: 9.986748662091358
- type: nauc_precision_at_5_max
value: 41.214164860776755
- type: nauc_precision_at_5_std
value: 28.22551704127726
- type: nauc_recall_at_1000_diff1
value: 10.984331766850506
- type: nauc_recall_at_1000_max
value: 24.641216018034104
- type: nauc_recall_at_1000_std
value: 26.91064221008446
- type: nauc_recall_at_100_diff1
value: 23.7009352078473
- type: nauc_recall_at_100_max
value: 30.176031609451297
- type: nauc_recall_at_100_std
value: 20.360365243211564
- type: nauc_recall_at_10_diff1
value: 28.11831737650638
- type: nauc_recall_at_10_max
value: 24.21539670487414
- type: nauc_recall_at_10_std
value: 2.245504974150148
- type: nauc_recall_at_1_diff1
value: 54.30208426149679
- type: nauc_recall_at_1_max
value: 18.69125148481608
- type: nauc_recall_at_1_std
value: -8.970955660291802
- type: nauc_recall_at_20_diff1
value: 26.199425305139908
- type: nauc_recall_at_20_max
value: 24.66704097503736
- type: nauc_recall_at_20_std
value: 5.86052107206246
- type: nauc_recall_at_3_diff1
value: 42.88348677575622
- type: nauc_recall_at_3_max
value: 21.189371077603308
- type: nauc_recall_at_3_std
value: -4.537510127238226
- type: nauc_recall_at_5_diff1
value: 30.7936756722569
- type: nauc_recall_at_5_max
value: 21.06136406164962
- type: nauc_recall_at_5_std
value: -1.4113804735229794
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 36.201
- type: ndcg_at_100
value: 32.736
- type: ndcg_at_1000
value: 41.099000000000004
- type: ndcg_at_20
value: 33.724
- type: ndcg_at_3
value: 42.242000000000004
- type: ndcg_at_5
value: 40.137
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 26.904
- type: precision_at_100
value: 8.368
- type: precision_at_1000
value: 2.078
- type: precision_at_20
value: 19.845
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 35.108
- type: recall_at_1
value: 5.654
- type: recall_at_10
value: 17.793
- type: recall_at_100
value: 32.483000000000004
- type: recall_at_1000
value: 63.294
- type: recall_at_20
value: 21.754
- type: recall_at_3
value: 10.771
- type: recall_at_5
value: 14.084
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 62.464
- type: map_at_1
value: 38.0
- type: map_at_10
value: 54.806
- type: map_at_100
value: 55.599
- type: map_at_1000
value: 55.617000000000004
- type: map_at_20
value: 55.336
- type: map_at_3
value: 50.58200000000001
- type: map_at_5
value: 53.181
- type: mrr_at_1
value: 42.46813441483198
- type: mrr_at_10
value: 57.060710147326446
- type: mrr_at_100
value: 57.60978373431328
- type: mrr_at_1000
value: 57.62192762809547
- type: mrr_at_20
value: 57.43431796174232
- type: mrr_at_3
value: 53.78041714947835
- type: mrr_at_5
value: 55.81257242178437
- type: nauc_map_at_1000_diff1
value: 38.337572188308194
- type: nauc_map_at_1000_max
value: 27.550035254787197
- type: nauc_map_at_1000_std
value: -7.5513729587308145
- type: nauc_map_at_100_diff1
value: 38.335337794455015
- type: nauc_map_at_100_max
value: 27.56919614414171
- type: nauc_map_at_100_std
value: -7.526017855405723
- type: nauc_map_at_10_diff1
value: 38.308131361353816
- type: nauc_map_at_10_max
value: 27.691849580929933
- type: nauc_map_at_10_std
value: -7.971461731555123
- type: nauc_map_at_1_diff1
value: 42.721072690634884
- type: nauc_map_at_1_max
value: 21.750451486885332
- type: nauc_map_at_1_std
value: -9.99540950522643
- type: nauc_map_at_20_diff1
value: 38.25792874982169
- type: nauc_map_at_20_max
value: 27.68877906159661
- type: nauc_map_at_20_std
value: -7.560753583212102
- type: nauc_map_at_3_diff1
value: 37.950570055936254
- type: nauc_map_at_3_max
value: 26.257969511794858
- type: nauc_map_at_3_std
value: -9.236868658300553
- type: nauc_map_at_5_diff1
value: 37.99893219450212
- type: nauc_map_at_5_max
value: 27.293454259158057
- type: nauc_map_at_5_std
value: -8.734089449603806
- type: nauc_mrr_at_1000_diff1
value: 37.777767467474774
- type: nauc_mrr_at_1000_max
value: 27.39507603748298
- type: nauc_mrr_at_1000_std
value: -5.554754076870114
- type: nauc_mrr_at_100_diff1
value: 37.77981674583538
- type: nauc_mrr_at_100_max
value: 27.411100989441557
- type: nauc_mrr_at_100_std
value: -5.539061231412731
- type: nauc_mrr_at_10_diff1
value: 37.72399003363479
- type: nauc_mrr_at_10_max
value: 27.618142546685416
- type: nauc_mrr_at_10_std
value: -5.6819843907448195
- type: nauc_mrr_at_1_diff1
value: 41.17596078958236
- type: nauc_mrr_at_1_max
value: 23.32588591818617
- type: nauc_mrr_at_1_std
value: -7.126628034623689
- type: nauc_mrr_at_20_diff1
value: 37.695136721588
- type: nauc_mrr_at_20_max
value: 27.52850676467322
- type: nauc_mrr_at_20_std
value: -5.50667995515647
- type: nauc_mrr_at_3_diff1
value: 37.23845700908964
- type: nauc_mrr_at_3_max
value: 26.69389772971012
- type: nauc_mrr_at_3_std
value: -6.31868405989011
- type: nauc_mrr_at_5_diff1
value: 37.33757394192838
- type: nauc_mrr_at_5_max
value: 27.42091593836207
- type: nauc_mrr_at_5_std
value: -5.993243330132065
- type: nauc_ndcg_at_1000_diff1
value: 37.74836061640332
- type: nauc_ndcg_at_1000_max
value: 29.03148916289089
- type: nauc_ndcg_at_1000_std
value: -5.543065770074502
- type: nauc_ndcg_at_100_diff1
value: 37.75593955089626
- type: nauc_ndcg_at_100_max
value: 29.67109480272493
- type: nauc_ndcg_at_100_std
value: -4.773697596687493
- type: nauc_ndcg_at_10_diff1
value: 37.41701174824348
- type: nauc_ndcg_at_10_max
value: 30.448703434043445
- type: nauc_ndcg_at_10_std
value: -6.306202666419071
- type: nauc_ndcg_at_1_diff1
value: 41.17596078958236
- type: nauc_ndcg_at_1_max
value: 23.32588591818617
- type: nauc_ndcg_at_1_std
value: -7.126628034623689
- type: nauc_ndcg_at_20_diff1
value: 37.17445197824622
- type: nauc_ndcg_at_20_max
value: 30.47378561555209
- type: nauc_ndcg_at_20_std
value: -4.921584853993488
- type: nauc_ndcg_at_3_diff1
value: 36.5261976812068
- type: nauc_ndcg_at_3_max
value: 27.560538820208926
- type: nauc_ndcg_at_3_std
value: -8.556686332882931
- type: nauc_ndcg_at_5_diff1
value: 36.571462759614526
- type: nauc_ndcg_at_5_max
value: 29.363401730752585
- type: nauc_ndcg_at_5_std
value: -7.825739170420347
- type: nauc_precision_at_1000_diff1
value: -12.588899483401223
- type: nauc_precision_at_1000_max
value: 2.641097890578701
- type: nauc_precision_at_1000_std
value: 17.643107625788748
- type: nauc_precision_at_100_diff1
value: -8.40579874206785
- type: nauc_precision_at_100_max
value: 9.725496771040037
- type: nauc_precision_at_100_std
value: 21.558582760191243
- type: nauc_precision_at_10_diff1
value: 6.619157191854486
- type: nauc_precision_at_10_max
value: 23.767406373688402
- type: nauc_precision_at_10_std
value: 10.428535003478808
- type: nauc_precision_at_1_diff1
value: 41.17596078958236
- type: nauc_precision_at_1_max
value: 23.32588591818617
- type: nauc_precision_at_1_std
value: -7.126628034623689
- type: nauc_precision_at_20_diff1
value: -0.6449974218292859
- type: nauc_precision_at_20_max
value: 20.211503851418783
- type: nauc_precision_at_20_std
value: 17.922745410142575
- type: nauc_precision_at_3_diff1
value: 19.710276097428657
- type: nauc_precision_at_3_max
value: 26.768918044758706
- type: nauc_precision_at_3_std
value: -1.0636448912049246
- type: nauc_precision_at_5_diff1
value: 13.073181337982613
- type: nauc_precision_at_5_max
value: 26.418340338971024
- type: nauc_precision_at_5_std
value: 2.9842078949528688
- type: nauc_recall_at_1000_diff1
value: 30.52411148739828
- type: nauc_recall_at_1000_max
value: 90.96409807536762
- type: nauc_recall_at_1000_std
value: 83.94857830921949
- type: nauc_recall_at_100_diff1
value: 36.936303690592155
- type: nauc_recall_at_100_max
value: 71.91515014325869
- type: nauc_recall_at_100_std
value: 48.93061263403371
- type: nauc_recall_at_10_diff1
value: 32.84292362076269
- type: nauc_recall_at_10_max
value: 44.27252783122478
- type: nauc_recall_at_10_std
value: -1.5981198975612385
- type: nauc_recall_at_1_diff1
value: 42.721072690634884
- type: nauc_recall_at_1_max
value: 21.750451486885332
- type: nauc_recall_at_1_std
value: -9.99540950522643
- type: nauc_recall_at_20_diff1
value: 29.36724417081702
- type: nauc_recall_at_20_max
value: 52.035846390214715
- type: nauc_recall_at_20_std
value: 11.967264191332818
- type: nauc_recall_at_3_diff1
value: 31.634923771936098
- type: nauc_recall_at_3_max
value: 30.225743369869473
- type: nauc_recall_at_3_std
value: -9.253665347118615
- type: nauc_recall_at_5_diff1
value: 30.66271853090737
- type: nauc_recall_at_5_max
value: 35.70815715994996
- type: nauc_recall_at_5_std
value: -7.836012956078996
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 62.464
- type: ndcg_at_100
value: 65.618
- type: ndcg_at_1000
value: 66.014
- type: ndcg_at_20
value: 64.12
- type: ndcg_at_3
value: 54.790000000000006
- type: ndcg_at_5
value: 58.992
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.959
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.380999999999999
- type: precision_at_3
value: 24.73
- type: precision_at_5
value: 17.299999999999997
- type: recall_at_1
value: 38.0
- type: recall_at_10
value: 83.22699999999999
- type: recall_at_100
value: 96.584
- type: recall_at_1000
value: 99.512
- type: recall_at_20
value: 89.291
- type: recall_at_3
value: 63.666
- type: recall_at_5
value: 73.27900000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 87.366
- type: map_at_1
value: 69.95700000000001
- type: map_at_10
value: 83.55
- type: map_at_100
value: 84.196
- type: map_at_1000
value: 84.21600000000001
- type: map_at_20
value: 83.982
- type: map_at_3
value: 80.647
- type: map_at_5
value: 82.443
- type: mrr_at_1
value: 80.39
- type: mrr_at_10
value: 86.65646031746004
- type: mrr_at_100
value: 86.7852113210373
- type: mrr_at_1000
value: 86.78651118354796
- type: mrr_at_20
value: 86.75772838878498
- type: mrr_at_3
value: 85.67499999999971
- type: mrr_at_5
value: 86.33749999999962
- type: nauc_map_at_1000_diff1
value: 76.68189702770007
- type: nauc_map_at_1000_max
value: 36.19988239025682
- type: nauc_map_at_1000_std
value: -26.231691135645736
- type: nauc_map_at_100_diff1
value: 76.68832712120171
- type: nauc_map_at_100_max
value: 36.18627717337547
- type: nauc_map_at_100_std
value: -26.28243886166
- type: nauc_map_at_10_diff1
value: 76.88888516032657
- type: nauc_map_at_10_max
value: 35.69809861085124
- type: nauc_map_at_10_std
value: -27.859425473864224
- type: nauc_map_at_1_diff1
value: 79.5243725217315
- type: nauc_map_at_1_max
value: 27.092773841207002
- type: nauc_map_at_1_std
value: -26.223200911204543
- type: nauc_map_at_20_diff1
value: 76.74938996155176
- type: nauc_map_at_20_max
value: 36.07373781351406
- type: nauc_map_at_20_std
value: -26.891400098628015
- type: nauc_map_at_3_diff1
value: 77.29604745045076
- type: nauc_map_at_3_max
value: 33.11431059356283
- type: nauc_map_at_3_std
value: -29.555237195931085
- type: nauc_map_at_5_diff1
value: 77.14069217901078
- type: nauc_map_at_5_max
value: 34.68656073526487
- type: nauc_map_at_5_std
value: -28.945053669861508
- type: nauc_mrr_at_1000_diff1
value: 76.66087451567746
- type: nauc_mrr_at_1000_max
value: 38.78133177265328
- type: nauc_mrr_at_1000_std
value: -23.75726541774991
- type: nauc_mrr_at_100_diff1
value: 76.66117078261013
- type: nauc_mrr_at_100_max
value: 38.782533036423885
- type: nauc_mrr_at_100_std
value: -23.752587601473568
- type: nauc_mrr_at_10_diff1
value: 76.65866401411019
- type: nauc_mrr_at_10_max
value: 38.87950311049704
- type: nauc_mrr_at_10_std
value: -23.873660706680578
- type: nauc_mrr_at_1_diff1
value: 77.42633506487041
- type: nauc_mrr_at_1_max
value: 37.93973722217786
- type: nauc_mrr_at_1_std
value: -23.3984130771317
- type: nauc_mrr_at_20_diff1
value: 76.66210684923414
- type: nauc_mrr_at_20_max
value: 38.81293033048911
- type: nauc_mrr_at_20_std
value: -23.736590746133736
- type: nauc_mrr_at_3_diff1
value: 76.33711764736019
- type: nauc_mrr_at_3_max
value: 38.5659231830368
- type: nauc_mrr_at_3_std
value: -23.99588149124865
- type: nauc_mrr_at_5_diff1
value: 76.57123830226054
- type: nauc_mrr_at_5_max
value: 38.97947097392977
- type: nauc_mrr_at_5_std
value: -23.943668957974246
- type: nauc_ndcg_at_1000_diff1
value: 76.38447339050585
- type: nauc_ndcg_at_1000_max
value: 37.756822792877934
- type: nauc_ndcg_at_1000_std
value: -24.046995734357164
- type: nauc_ndcg_at_100_diff1
value: 76.44058018066822
- type: nauc_ndcg_at_100_max
value: 37.72948294169218
- type: nauc_ndcg_at_100_std
value: -24.083432140741795
- type: nauc_ndcg_at_10_diff1
value: 76.56246287923074
- type: nauc_ndcg_at_10_max
value: 37.0329253490553
- type: nauc_ndcg_at_10_std
value: -26.6495163705961
- type: nauc_ndcg_at_1_diff1
value: 77.4085129990432
- type: nauc_ndcg_at_1_max
value: 38.06139172214421
- type: nauc_ndcg_at_1_std
value: -23.656477126977386
- type: nauc_ndcg_at_20_diff1
value: 76.50192496743098
- type: nauc_ndcg_at_20_max
value: 37.51759311013985
- type: nauc_ndcg_at_20_std
value: -25.45517058360004
- type: nauc_ndcg_at_3_diff1
value: 75.94398494081794
- type: nauc_ndcg_at_3_max
value: 35.7666711547279
- type: nauc_ndcg_at_3_std
value: -26.866022682361578
- type: nauc_ndcg_at_5_diff1
value: 76.47334274088344
- type: nauc_ndcg_at_5_max
value: 36.40830331490731
- type: nauc_ndcg_at_5_std
value: -27.170121189572765
- type: nauc_precision_at_1000_diff1
value: -43.33672630765437
- type: nauc_precision_at_1000_max
value: -5.089751329149161
- type: nauc_precision_at_1000_std
value: 30.6241447847051
- type: nauc_precision_at_100_diff1
value: -42.736833035629864
- type: nauc_precision_at_100_max
value: -4.060198408346224
- type: nauc_precision_at_100_std
value: 29.807050266205344
- type: nauc_precision_at_10_diff1
value: -35.90810562245906
- type: nauc_precision_at_10_max
value: 1.1633204529249133
- type: nauc_precision_at_10_std
value: 20.129691203276018
- type: nauc_precision_at_1_diff1
value: 77.4085129990432
- type: nauc_precision_at_1_max
value: 38.06139172214421
- type: nauc_precision_at_1_std
value: -23.656477126977386
- type: nauc_precision_at_20_diff1
value: -40.2132286912738
- type: nauc_precision_at_20_max
value: -1.3004735030734194
- type: nauc_precision_at_20_std
value: 25.15612293757488
- type: nauc_precision_at_3_diff1
value: -13.873825299883904
- type: nauc_precision_at_3_max
value: 11.038689278907233
- type: nauc_precision_at_3_std
value: 5.4276449621706
- type: nauc_precision_at_5_diff1
value: -27.151668633894737
- type: nauc_precision_at_5_max
value: 5.795130010163115
- type: nauc_precision_at_5_std
value: 13.220722167587375
- type: nauc_recall_at_1000_diff1
value: 83.903950427863
- type: nauc_recall_at_1000_max
value: 37.82919000897223
- type: nauc_recall_at_1000_std
value: 70.65670846771707
- type: nauc_recall_at_100_diff1
value: 75.23306095335836
- type: nauc_recall_at_100_max
value: 37.54281648247423
- type: nauc_recall_at_100_std
value: 8.434289114377373
- type: nauc_recall_at_10_diff1
value: 72.7872912723047
- type: nauc_recall_at_10_max
value: 34.261519652104184
- type: nauc_recall_at_10_std
value: -34.60101950810808
- type: nauc_recall_at_1_diff1
value: 79.5243725217315
- type: nauc_recall_at_1_max
value: 27.092773841207002
- type: nauc_recall_at_1_std
value: -26.223200911204543
- type: nauc_recall_at_20_diff1
value: 72.8297963091964
- type: nauc_recall_at_20_max
value: 36.070220569670916
- type: nauc_recall_at_20_std
value: -27.20897179168245
- type: nauc_recall_at_3_diff1
value: 73.47456374650459
- type: nauc_recall_at_3_max
value: 29.901663407294816
- type: nauc_recall_at_3_std
value: -32.83329537040381
- type: nauc_recall_at_5_diff1
value: 73.05025750827126
- type: nauc_recall_at_5_max
value: 32.35733470860963
- type: nauc_recall_at_5_std
value: -34.32357558493091
- type: ndcg_at_1
value: 80.4
- type: ndcg_at_10
value: 87.366
- type: ndcg_at_100
value: 88.7
- type: ndcg_at_1000
value: 88.842
- type: ndcg_at_20
value: 88.11
- type: ndcg_at_3
value: 84.52499999999999
- type: ndcg_at_5
value: 86.047
- type: precision_at_1
value: 80.4
- type: precision_at_10
value: 13.235
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 7.037
- type: precision_at_3
value: 36.9
- type: precision_at_5
value: 24.236
- type: recall_at_1
value: 69.95700000000001
- type: recall_at_10
value: 94.535
- type: recall_at_100
value: 99.164
- type: recall_at_1000
value: 99.855
- type: recall_at_20
value: 96.974
- type: recall_at_3
value: 86.33800000000001
- type: recall_at_5
value: 90.69
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 21.492
- type: map_at_1
value: 5.192
- type: map_at_10
value: 12.959000000000001
- type: map_at_100
value: 14.963999999999999
- type: map_at_1000
value: 15.261
- type: map_at_20
value: 13.988999999999999
- type: map_at_3
value: 9.235
- type: map_at_5
value: 11.042
- type: mrr_at_1
value: 25.5
- type: mrr_at_10
value: 36.37313492063491
- type: mrr_at_100
value: 37.36517957347626
- type: mrr_at_1000
value: 37.42538601073437
- type: mrr_at_20
value: 36.987896404421136
- type: mrr_at_3
value: 32.966666666666654
- type: mrr_at_5
value: 34.95166666666664
- type: nauc_map_at_1000_diff1
value: 13.635120934154395
- type: nauc_map_at_1000_max
value: 28.03542983005195
- type: nauc_map_at_1000_std
value: 17.07156940311778
- type: nauc_map_at_100_diff1
value: 13.59237295184475
- type: nauc_map_at_100_max
value: 27.992291365051237
- type: nauc_map_at_100_std
value: 16.926533467400464
- type: nauc_map_at_10_diff1
value: 14.149193235999993
- type: nauc_map_at_10_max
value: 26.520643811139305
- type: nauc_map_at_10_std
value: 13.168673602548925
- type: nauc_map_at_1_diff1
value: 20.096094508148465
- type: nauc_map_at_1_max
value: 17.41582245576302
- type: nauc_map_at_1_std
value: 5.771729007558897
- type: nauc_map_at_20_diff1
value: 13.977726400526427
- type: nauc_map_at_20_max
value: 27.2322235491895
- type: nauc_map_at_20_std
value: 14.972781677750435
- type: nauc_map_at_3_diff1
value: 17.371153027460355
- type: nauc_map_at_3_max
value: 24.457758503208254
- type: nauc_map_at_3_std
value: 7.719726821179824
- type: nauc_map_at_5_diff1
value: 14.600442843442574
- type: nauc_map_at_5_max
value: 25.899736370856296
- type: nauc_map_at_5_std
value: 10.125349354853359
- type: nauc_mrr_at_1000_diff1
value: 18.70342821390236
- type: nauc_mrr_at_1000_max
value: 23.365194520549114
- type: nauc_mrr_at_1000_std
value: 12.185114294903236
- type: nauc_mrr_at_100_diff1
value: 18.677858738015907
- type: nauc_mrr_at_100_max
value: 23.372641996726742
- type: nauc_mrr_at_100_std
value: 12.216130561991909
- type: nauc_mrr_at_10_diff1
value: 18.79094453090232
- type: nauc_mrr_at_10_max
value: 23.511686337006466
- type: nauc_mrr_at_10_std
value: 11.879716687008134
- type: nauc_mrr_at_1_diff1
value: 20.10455171810408
- type: nauc_mrr_at_1_max
value: 17.741566234315428
- type: nauc_mrr_at_1_std
value: 6.1676764583652215
- type: nauc_mrr_at_20_diff1
value: 18.70143648544655
- type: nauc_mrr_at_20_max
value: 23.45603239095019
- type: nauc_mrr_at_20_std
value: 12.244613576686202
- type: nauc_mrr_at_3_diff1
value: 18.894662528857374
- type: nauc_mrr_at_3_max
value: 23.3739038101588
- type: nauc_mrr_at_3_std
value: 10.4709044796543
- type: nauc_mrr_at_5_diff1
value: 18.877786065095563
- type: nauc_mrr_at_5_max
value: 23.78061081203872
- type: nauc_mrr_at_5_std
value: 11.847882917869622
- type: nauc_ndcg_at_1000_diff1
value: 13.99159027398115
- type: nauc_ndcg_at_1000_max
value: 29.44766808611483
- type: nauc_ndcg_at_1000_std
value: 24.289749574699915
- type: nauc_ndcg_at_100_diff1
value: 13.164020363258746
- type: nauc_ndcg_at_100_max
value: 29.642442997167723
- type: nauc_ndcg_at_100_std
value: 23.761764515453866
- type: nauc_ndcg_at_10_diff1
value: 14.839883268638546
- type: nauc_ndcg_at_10_max
value: 27.21043708455449
- type: nauc_ndcg_at_10_std
value: 15.56110419291775
- type: nauc_ndcg_at_1_diff1
value: 20.10455171810408
- type: nauc_ndcg_at_1_max
value: 17.741566234315428
- type: nauc_ndcg_at_1_std
value: 6.1676764583652215
- type: nauc_ndcg_at_20_diff1
value: 14.27998110295395
- type: nauc_ndcg_at_20_max
value: 28.2492026337839
- type: nauc_ndcg_at_20_std
value: 18.822356982979105
- type: nauc_ndcg_at_3_diff1
value: 17.659263157535445
- type: nauc_ndcg_at_3_max
value: 25.416706421591396
- type: nauc_ndcg_at_3_std
value: 9.650689638152636
- type: nauc_ndcg_at_5_diff1
value: 15.38459833918123
- type: nauc_ndcg_at_5_max
value: 26.92495519416969
- type: nauc_ndcg_at_5_std
value: 12.71017696809276
- type: nauc_precision_at_1000_diff1
value: 6.128490135458364
- type: nauc_precision_at_1000_max
value: 23.52693893261883
- type: nauc_precision_at_1000_std
value: 36.280432732819925
- type: nauc_precision_at_100_diff1
value: 5.306163791220436
- type: nauc_precision_at_100_max
value: 27.67851033239246
- type: nauc_precision_at_100_std
value: 34.29821573752515
- type: nauc_precision_at_10_diff1
value: 10.829686435425472
- type: nauc_precision_at_10_max
value: 27.201648684015318
- type: nauc_precision_at_10_std
value: 19.376999508233254
- type: nauc_precision_at_1_diff1
value: 20.10455171810408
- type: nauc_precision_at_1_max
value: 17.741566234315428
- type: nauc_precision_at_1_std
value: 6.1676764583652215
- type: nauc_precision_at_20_diff1
value: 9.416169626702048
- type: nauc_precision_at_20_max
value: 27.65257998670333
- type: nauc_precision_at_20_std
value: 24.761868509805826
- type: nauc_precision_at_3_diff1
value: 16.666456902017348
- type: nauc_precision_at_3_max
value: 27.9969730961105
- type: nauc_precision_at_3_std
value: 10.991562741393231
- type: nauc_precision_at_5_diff1
value: 12.26205064462843
- type: nauc_precision_at_5_max
value: 29.083848730874095
- type: nauc_precision_at_5_std
value: 15.66630836555747
- type: nauc_recall_at_1000_diff1
value: 5.600277836894063
- type: nauc_recall_at_1000_max
value: 23.228705161815526
- type: nauc_recall_at_1000_std
value: 36.822431061799485
- type: nauc_recall_at_100_diff1
value: 4.991781244867178
- type: nauc_recall_at_100_max
value: 27.70095625483475
- type: nauc_recall_at_100_std
value: 34.67168431597854
- type: nauc_recall_at_10_diff1
value: 10.580860425931972
- type: nauc_recall_at_10_max
value: 27.145829414223666
- type: nauc_recall_at_10_std
value: 19.330630157067382
- type: nauc_recall_at_1_diff1
value: 20.096094508148465
- type: nauc_recall_at_1_max
value: 17.41582245576302
- type: nauc_recall_at_1_std
value: 5.771729007558897
- type: nauc_recall_at_20_diff1
value: 9.06945331260344
- type: nauc_recall_at_20_max
value: 27.56725251066482
- type: nauc_recall_at_20_std
value: 24.77644509886098
- type: nauc_recall_at_3_diff1
value: 16.660507676429322
- type: nauc_recall_at_3_max
value: 27.816546386536434
- type: nauc_recall_at_3_std
value: 10.687824478247007
- type: nauc_recall_at_5_diff1
value: 11.992514446369388
- type: nauc_recall_at_5_max
value: 28.789031176671948
- type: nauc_recall_at_5_std
value: 15.422118990090805
- type: ndcg_at_1
value: 25.5
- type: ndcg_at_10
value: 21.492
- type: ndcg_at_100
value: 29.022
- type: ndcg_at_1000
value: 34.298
- type: ndcg_at_20
value: 24.237000000000002
- type: ndcg_at_3
value: 20.392
- type: ndcg_at_5
value: 17.801000000000002
- type: precision_at_1
value: 25.5
- type: precision_at_10
value: 11.09
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.346
- type: precision_at_20
value: 7.135
- type: precision_at_3
value: 18.933
- type: precision_at_5
value: 15.52
- type: recall_at_1
value: 5.192
- type: recall_at_10
value: 22.512999999999998
- type: recall_at_100
value: 44.505
- type: recall_at_1000
value: 70.267
- type: recall_at_20
value: 28.965000000000003
- type: recall_at_3
value: 11.522
- type: recall_at_5
value: 15.751999999999999
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 71.586
- type: map_at_1
value: 56.760999999999996
- type: map_at_10
value: 66.893
- type: map_at_100
value: 67.42
- type: map_at_1000
value: 67.44200000000001
- type: map_at_20
value: 67.232
- type: map_at_3
value: 64.193
- type: map_at_5
value: 65.73400000000001
- type: mrr_at_1
value: 60.0
- type: mrr_at_10
value: 68.20383597883595
- type: mrr_at_100
value: 68.58867453733343
- type: mrr_at_1000
value: 68.61117469977329
- type: mrr_at_20
value: 68.43973740684265
- type: mrr_at_3
value: 66.11111111111111
- type: mrr_at_5
value: 67.44444444444446
- type: nauc_map_at_1000_diff1
value: 72.66688261123035
- type: nauc_map_at_1000_max
value: 61.02926282006283
- type: nauc_map_at_1000_std
value: 11.084549829740526
- type: nauc_map_at_100_diff1
value: 72.66226192320828
- type: nauc_map_at_100_max
value: 61.04393223108811
- type: nauc_map_at_100_std
value: 11.101529343291695
- type: nauc_map_at_10_diff1
value: 72.66732266693091
- type: nauc_map_at_10_max
value: 61.24124296311832
- type: nauc_map_at_10_std
value: 10.91179451961794
- type: nauc_map_at_1_diff1
value: 74.2356464256346
- type: nauc_map_at_1_max
value: 54.06962758957632
- type: nauc_map_at_1_std
value: 0.8037891907963532
- type: nauc_map_at_20_diff1
value: 72.65198594061253
- type: nauc_map_at_20_max
value: 61.130159351448185
- type: nauc_map_at_20_std
value: 11.2246899245522
- type: nauc_map_at_3_diff1
value: 72.78578673303954
- type: nauc_map_at_3_max
value: 59.19073262936321
- type: nauc_map_at_3_std
value: 8.460301560522968
- type: nauc_map_at_5_diff1
value: 72.55004168261968
- type: nauc_map_at_5_max
value: 59.75181935082357
- type: nauc_map_at_5_std
value: 9.440299527201889
- type: nauc_mrr_at_1000_diff1
value: 72.82720348470325
- type: nauc_mrr_at_1000_max
value: 62.344231223741446
- type: nauc_mrr_at_1000_std
value: 12.60196558488974
- type: nauc_mrr_at_100_diff1
value: 72.82236849255094
- type: nauc_mrr_at_100_max
value: 62.35799491393125
- type: nauc_mrr_at_100_std
value: 12.617900773655673
- type: nauc_mrr_at_10_diff1
value: 72.7722847495086
- type: nauc_mrr_at_10_max
value: 62.66642401155435
- type: nauc_mrr_at_10_std
value: 12.906381237738746
- type: nauc_mrr_at_1_diff1
value: 74.71208073612343
- type: nauc_mrr_at_1_max
value: 59.50430394775893
- type: nauc_mrr_at_1_std
value: 8.129514198080512
- type: nauc_mrr_at_20_diff1
value: 72.78312367361772
- type: nauc_mrr_at_20_max
value: 62.421122493761885
- type: nauc_mrr_at_20_std
value: 12.693437522498588
- type: nauc_mrr_at_3_diff1
value: 73.50670156385345
- type: nauc_mrr_at_3_max
value: 62.01717537699209
- type: nauc_mrr_at_3_std
value: 11.926548252191182
- type: nauc_mrr_at_5_diff1
value: 72.62204028549876
- type: nauc_mrr_at_5_max
value: 62.319358766312085
- type: nauc_mrr_at_5_std
value: 13.081257923284342
- type: nauc_ndcg_at_1000_diff1
value: 72.29960539074736
- type: nauc_ndcg_at_1000_max
value: 62.75096959221402
- type: nauc_ndcg_at_1000_std
value: 13.81528462505362
- type: nauc_ndcg_at_100_diff1
value: 72.19985782073529
- type: nauc_ndcg_at_100_max
value: 63.18837705326287
- type: nauc_ndcg_at_100_std
value: 14.506479655117138
- type: nauc_ndcg_at_10_diff1
value: 71.85759847832983
- type: nauc_ndcg_at_10_max
value: 64.150996056865
- type: nauc_ndcg_at_10_std
value: 14.580606901634278
- type: nauc_ndcg_at_1_diff1
value: 74.71208073612343
- type: nauc_ndcg_at_1_max
value: 59.50430394775893
- type: nauc_ndcg_at_1_std
value: 8.129514198080512
- type: nauc_ndcg_at_20_diff1
value: 71.80987178228351
- type: nauc_ndcg_at_20_max
value: 63.56269460865743
- type: nauc_ndcg_at_20_std
value: 15.024978004625922
- type: nauc_ndcg_at_3_diff1
value: 72.35095651602592
- type: nauc_ndcg_at_3_max
value: 61.60548011855679
- type: nauc_ndcg_at_3_std
value: 12.048248788835263
- type: nauc_ndcg_at_5_diff1
value: 71.48615621881864
- type: nauc_ndcg_at_5_max
value: 61.72870035979784
- type: nauc_ndcg_at_5_std
value: 12.83048357446691
- type: nauc_precision_at_1000_diff1
value: -14.743011420972
- type: nauc_precision_at_1000_max
value: 19.281995763080158
- type: nauc_precision_at_1000_std
value: 49.6140660398164
- type: nauc_precision_at_100_diff1
value: 0.11278174806205563
- type: nauc_precision_at_100_max
value: 29.704511820077332
- type: nauc_precision_at_100_std
value: 47.84916954122579
- type: nauc_precision_at_10_diff1
value: 20.498227967235728
- type: nauc_precision_at_10_max
value: 47.883119365891595
- type: nauc_precision_at_10_std
value: 45.182178693450595
- type: nauc_precision_at_1_diff1
value: 74.71208073612343
- type: nauc_precision_at_1_max
value: 59.50430394775893
- type: nauc_precision_at_1_std
value: 8.129514198080512
- type: nauc_precision_at_20_diff1
value: 12.551737222341455
- type: nauc_precision_at_20_max
value: 40.618899501225634
- type: nauc_precision_at_20_std
value: 48.5598454249067
- type: nauc_precision_at_3_diff1
value: 47.67720764601145
- type: nauc_precision_at_3_max
value: 56.50632017305064
- type: nauc_precision_at_3_std
value: 31.14175140162157
- type: nauc_precision_at_5_diff1
value: 35.10058622792819
- type: nauc_precision_at_5_max
value: 51.88948872657981
- type: nauc_precision_at_5_std
value: 37.62796957461928
- type: nauc_recall_at_1000_diff1
value: 79.57516339869238
- type: nauc_recall_at_1000_max
value: 86.11111111111035
- type: nauc_recall_at_1000_std
value: 79.57516339869238
- type: nauc_recall_at_100_diff1
value: 70.50859559510081
- type: nauc_recall_at_100_max
value: 79.17009941231396
- type: nauc_recall_at_100_std
value: 44.32910419069595
- type: nauc_recall_at_10_diff1
value: 66.16118569361245
- type: nauc_recall_at_10_max
value: 74.73542948302286
- type: nauc_recall_at_10_std
value: 27.680330939810037
- type: nauc_recall_at_1_diff1
value: 74.2356464256346
- type: nauc_recall_at_1_max
value: 54.06962758957632
- type: nauc_recall_at_1_std
value: 0.8037891907963532
- type: nauc_recall_at_20_diff1
value: 65.4748436545527
- type: nauc_recall_at_20_max
value: 73.81532199081235
- type: nauc_recall_at_20_std
value: 33.59324708196253
- type: nauc_recall_at_3_diff1
value: 68.83194804473622
- type: nauc_recall_at_3_max
value: 61.77722610439669
- type: nauc_recall_at_3_std
value: 13.984923756556714
- type: nauc_recall_at_5_diff1
value: 65.51467417209523
- type: nauc_recall_at_5_max
value: 64.08276291427661
- type: nauc_recall_at_5_std
value: 19.976472037847167
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 71.586
- type: ndcg_at_100
value: 73.76899999999999
- type: ndcg_at_1000
value: 74.386
- type: ndcg_at_20
value: 72.612
- type: ndcg_at_3
value: 66.944
- type: ndcg_at_5
value: 69.333
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 5.033
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 56.760999999999996
- type: recall_at_10
value: 84.589
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 88.43299999999999
- type: recall_at_3
value: 72.10600000000001
- type: recall_at_5
value: 78.194
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 84.60600000000001
- type: map_at_1
value: 0.257
- type: map_at_10
value: 2.196
- type: map_at_100
value: 13.252
- type: map_at_1000
value: 31.473000000000003
- type: map_at_20
value: 4.023000000000001
- type: map_at_3
value: 0.722
- type: map_at_5
value: 1.146
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: nauc_map_at_1000_diff1
value: -30.674816554207062
- type: nauc_map_at_1000_max
value: 53.18598689657068
- type: nauc_map_at_1000_std
value: 78.88325309469121
- type: nauc_map_at_100_diff1
value: -17.6877824653978
- type: nauc_map_at_100_max
value: 19.584159765315658
- type: nauc_map_at_100_std
value: 48.051154190992726
- type: nauc_map_at_10_diff1
value: 20.076631089898626
- type: nauc_map_at_10_max
value: -8.642556160185636
- type: nauc_map_at_10_std
value: -5.768698617334298
- type: nauc_map_at_1_diff1
value: 27.342260509653798
- type: nauc_map_at_1_max
value: -23.400451210297994
- type: nauc_map_at_1_std
value: -21.152006353733853
- type: nauc_map_at_20_diff1
value: 8.019321726240506
- type: nauc_map_at_20_max
value: -1.4826378210544222
- type: nauc_map_at_20_std
value: 5.698208117745366
- type: nauc_map_at_3_diff1
value: 32.073377946749446
- type: nauc_map_at_3_max
value: -13.099353983204654
- type: nauc_map_at_3_std
value: -15.36319127398037
- type: nauc_map_at_5_diff1
value: 22.500045815797876
- type: nauc_map_at_5_max
value: -8.548135411428023
- type: nauc_map_at_5_std
value: -8.547850460331334
- type: nauc_mrr_at_1000_diff1
value: -6.022408963585526
- type: nauc_mrr_at_1000_max
value: 4.481792717087155
- type: nauc_mrr_at_1000_std
value: 51.6962340491753
- type: nauc_mrr_at_100_diff1
value: -6.022408963585526
- type: nauc_mrr_at_100_max
value: 4.481792717087155
- type: nauc_mrr_at_100_std
value: 51.6962340491753
- type: nauc_mrr_at_10_diff1
value: -6.022408963585526
- type: nauc_mrr_at_10_max
value: 4.481792717087155
- type: nauc_mrr_at_10_std
value: 51.6962340491753
- type: nauc_mrr_at_1_diff1
value: -6.022408963585076
- type: nauc_mrr_at_1_max
value: 4.481792717087146
- type: nauc_mrr_at_1_std
value: 51.69623404917518
- type: nauc_mrr_at_20_diff1
value: -6.022408963585526
- type: nauc_mrr_at_20_max
value: 4.481792717087155
- type: nauc_mrr_at_20_std
value: 51.6962340491753
- type: nauc_mrr_at_3_diff1
value: -6.022408963585526
- type: nauc_mrr_at_3_max
value: 4.481792717087155
- type: nauc_mrr_at_3_std
value: 51.6962340491753
- type: nauc_mrr_at_5_diff1
value: -6.022408963585526
- type: nauc_mrr_at_5_max
value: 4.481792717087155
- type: nauc_mrr_at_5_std
value: 51.6962340491753
- type: nauc_ndcg_at_1000_diff1
value: -20.79697283984295
- type: nauc_ndcg_at_1000_max
value: 52.97671908009218
- type: nauc_ndcg_at_1000_std
value: 75.43907707019758
- type: nauc_ndcg_at_100_diff1
value: -38.620752706946455
- type: nauc_ndcg_at_100_max
value: 49.41307462381511
- type: nauc_ndcg_at_100_std
value: 81.33299379244252
- type: nauc_ndcg_at_10_diff1
value: -18.611906363037356
- type: nauc_ndcg_at_10_max
value: 44.20544651664479
- type: nauc_ndcg_at_10_std
value: 61.322552829935816
- type: nauc_ndcg_at_1_diff1
value: 18.625935567849073
- type: nauc_ndcg_at_1_max
value: -10.104132769280879
- type: nauc_ndcg_at_1_std
value: 22.449560689879743
- type: nauc_ndcg_at_20_diff1
value: -30.61130208138771
- type: nauc_ndcg_at_20_max
value: 52.68851710375231
- type: nauc_ndcg_at_20_std
value: 69.72357683382992
- type: nauc_ndcg_at_3_diff1
value: 5.695394821691213
- type: nauc_ndcg_at_3_max
value: 37.909122367102135
- type: nauc_ndcg_at_3_std
value: 46.2366603255159
- type: nauc_ndcg_at_5_diff1
value: -15.273067832464731
- type: nauc_ndcg_at_5_max
value: 49.7054639475091
- type: nauc_ndcg_at_5_std
value: 58.83754007826166
- type: nauc_precision_at_1000_diff1
value: -31.565302588492035
- type: nauc_precision_at_1000_max
value: 52.56214379514724
- type: nauc_precision_at_1000_std
value: 53.40618234326055
- type: nauc_precision_at_100_diff1
value: -44.67273120709088
- type: nauc_precision_at_100_max
value: 48.30381155522576
- type: nauc_precision_at_100_std
value: 82.1984661602578
- type: nauc_precision_at_10_diff1
value: -24.737383556860145
- type: nauc_precision_at_10_max
value: 52.816815002878556
- type: nauc_precision_at_10_std
value: 67.99052410030845
- type: nauc_precision_at_1_diff1
value: -6.022408963585076
- type: nauc_precision_at_1_max
value: 4.481792717087146
- type: nauc_precision_at_1_std
value: 51.69623404917518
- type: nauc_precision_at_20_diff1
value: -40.23628054967093
- type: nauc_precision_at_20_max
value: 56.980056980057014
- type: nauc_precision_at_20_std
value: 76.60976777785895
- type: nauc_precision_at_3_diff1
value: -4.661784068466279
- type: nauc_precision_at_3_max
value: 59.052007899934125
- type: nauc_precision_at_3_std
value: 58.187952600394986
- type: nauc_precision_at_5_diff1
value: -38.11848143512736
- type: nauc_precision_at_5_max
value: 68.6149353358365
- type: nauc_precision_at_5_std
value: 73.55652899457661
- type: nauc_recall_at_1000_diff1
value: -14.886527444436345
- type: nauc_recall_at_1000_max
value: 48.07492302795808
- type: nauc_recall_at_1000_std
value: 65.05623212485906
- type: nauc_recall_at_100_diff1
value: -8.148385729388195
- type: nauc_recall_at_100_max
value: 8.041615364614533
- type: nauc_recall_at_100_std
value: 33.77187914574611
- type: nauc_recall_at_10_diff1
value: 24.333628413035942
- type: nauc_recall_at_10_max
value: -14.577877145192078
- type: nauc_recall_at_10_std
value: -12.131819145098557
- type: nauc_recall_at_1_diff1
value: 27.342260509653798
- type: nauc_recall_at_1_max
value: -23.400451210297994
- type: nauc_recall_at_1_std
value: -21.152006353733853
- type: nauc_recall_at_20_diff1
value: 13.695556376785564
- type: nauc_recall_at_20_max
value: -8.872009346408264
- type: nauc_recall_at_20_std
value: -3.163199444247112
- type: nauc_recall_at_3_diff1
value: 32.00442538217753
- type: nauc_recall_at_3_max
value: -15.159737942664552
- type: nauc_recall_at_3_std
value: -17.530833132440645
- type: nauc_recall_at_5_diff1
value: 22.64740552912405
- type: nauc_recall_at_5_max
value: -12.947090597010414
- type: nauc_recall_at_5_std
value: -12.914478822476807
- type: ndcg_at_1
value: 88.0
- type: ndcg_at_10
value: 84.60600000000001
- type: ndcg_at_100
value: 64.31700000000001
- type: ndcg_at_1000
value: 56.40500000000001
- type: ndcg_at_20
value: 80.561
- type: ndcg_at_3
value: 87.87700000000001
- type: ndcg_at_5
value: 86.641
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 88.2
- type: precision_at_100
value: 65.9
- type: precision_at_1000
value: 25.019999999999996
- type: precision_at_20
value: 84.7
- type: precision_at_3
value: 92.0
- type: precision_at_5
value: 90.0
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 2.338
- type: recall_at_100
value: 15.831999999999999
- type: recall_at_1000
value: 52.519000000000005
- type: recall_at_20
value: 4.367
- type: recall_at_3
value: 0.74
- type: recall_at_5
value: 1.196
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 31.426
- type: map_at_1
value: 3.4709999999999996
- type: map_at_10
value: 13.236999999999998
- type: map_at_100
value: 19.521
- type: map_at_1000
value: 21.224
- type: map_at_20
value: 15.626000000000001
- type: map_at_3
value: 7.152
- type: map_at_5
value: 9.914000000000001
- type: mrr_at_1
value: 44.89795918367347
- type: mrr_at_10
value: 57.54373177842565
- type: mrr_at_100
value: 57.855267710139536
- type: mrr_at_1000
value: 57.855267710139536
- type: mrr_at_20
value: 57.70071764969724
- type: mrr_at_3
value: 52.72108843537414
- type: mrr_at_5
value: 55.06802721088435
- type: nauc_map_at_1000_diff1
value: 21.148857552115558
- type: nauc_map_at_1000_max
value: 2.0837572569021323
- type: nauc_map_at_1000_std
value: 3.203419709665347
- type: nauc_map_at_100_diff1
value: 21.383778167597878
- type: nauc_map_at_100_max
value: 0.965767943155967
- type: nauc_map_at_100_std
value: 0.3949924961020957
- type: nauc_map_at_10_diff1
value: 27.178555638086394
- type: nauc_map_at_10_max
value: 4.480675175857958
- type: nauc_map_at_10_std
value: -13.69553539513878
- type: nauc_map_at_1_diff1
value: 27.63901823865334
- type: nauc_map_at_1_max
value: -18.6387233237763
- type: nauc_map_at_1_std
value: -27.02164241863646
- type: nauc_map_at_20_diff1
value: 23.892104752374888
- type: nauc_map_at_20_max
value: 3.5343136621362348
- type: nauc_map_at_20_std
value: -8.765101188860816
- type: nauc_map_at_3_diff1
value: 22.065793929837493
- type: nauc_map_at_3_max
value: 0.8063396680860568
- type: nauc_map_at_3_std
value: -20.404849396621824
- type: nauc_map_at_5_diff1
value: 22.66626080580714
- type: nauc_map_at_5_max
value: 5.423340658352383
- type: nauc_map_at_5_std
value: -18.31523779843455
- type: nauc_mrr_at_1000_diff1
value: 30.520722269282665
- type: nauc_mrr_at_1000_max
value: -16.644959497742267
- type: nauc_mrr_at_1000_std
value: -16.3824126273053
- type: nauc_mrr_at_100_diff1
value: 30.520722269282665
- type: nauc_mrr_at_100_max
value: -16.644959497742267
- type: nauc_mrr_at_100_std
value: -16.3824126273053
- type: nauc_mrr_at_10_diff1
value: 30.428248939332974
- type: nauc_mrr_at_10_max
value: -16.300183919261585
- type: nauc_mrr_at_10_std
value: -15.404823235836309
- type: nauc_mrr_at_1_diff1
value: 27.041346572613474
- type: nauc_mrr_at_1_max
value: -23.181309312755804
- type: nauc_mrr_at_1_std
value: -24.33076726484014
- type: nauc_mrr_at_20_diff1
value: 30.676558567379303
- type: nauc_mrr_at_20_max
value: -16.914268763031416
- type: nauc_mrr_at_20_std
value: -15.77742854976336
- type: nauc_mrr_at_3_diff1
value: 31.718457109787096
- type: nauc_mrr_at_3_max
value: -15.508391132202235
- type: nauc_mrr_at_3_std
value: -20.33229438349494
- type: nauc_mrr_at_5_diff1
value: 28.73798376227693
- type: nauc_mrr_at_5_max
value: -16.086295031060196
- type: nauc_mrr_at_5_std
value: -15.644604635769321
- type: nauc_ndcg_at_1000_diff1
value: 22.158724660189606
- type: nauc_ndcg_at_1000_max
value: -3.1755686809941475
- type: nauc_ndcg_at_1000_std
value: 19.258386224159075
- type: nauc_ndcg_at_100_diff1
value: 21.83846748649288
- type: nauc_ndcg_at_100_max
value: -10.939957598756036
- type: nauc_ndcg_at_100_std
value: 14.729678880436623
- type: nauc_ndcg_at_10_diff1
value: 26.944882726098424
- type: nauc_ndcg_at_10_max
value: -3.5176483833346617
- type: nauc_ndcg_at_10_std
value: -5.400606773697211
- type: nauc_ndcg_at_1_diff1
value: 26.649410985172985
- type: nauc_ndcg_at_1_max
value: -18.806716526067493
- type: nauc_ndcg_at_1_std
value: -25.100244999343506
- type: nauc_ndcg_at_20_diff1
value: 24.860266153648315
- type: nauc_ndcg_at_20_max
value: -7.521401821712892
- type: nauc_ndcg_at_20_std
value: -3.3696577425983003
- type: nauc_ndcg_at_3_diff1
value: 23.9933326962406
- type: nauc_ndcg_at_3_max
value: -0.4609479344284664
- type: nauc_ndcg_at_3_std
value: -15.176459166869897
- type: nauc_ndcg_at_5_diff1
value: 22.50595978713142
- type: nauc_ndcg_at_5_max
value: -2.1093870656000857
- type: nauc_ndcg_at_5_std
value: -12.732197425528257
- type: nauc_precision_at_1000_diff1
value: -20.335120385950024
- type: nauc_precision_at_1000_max
value: 26.95109729939765
- type: nauc_precision_at_1000_std
value: 29.981685890622117
- type: nauc_precision_at_100_diff1
value: -2.782114329320704
- type: nauc_precision_at_100_max
value: 2.9489322002048604
- type: nauc_precision_at_100_std
value: 67.3074073674319
- type: nauc_precision_at_10_diff1
value: 21.385177180383383
- type: nauc_precision_at_10_max
value: -2.4696365259422817
- type: nauc_precision_at_10_std
value: 14.469784299536673
- type: nauc_precision_at_1_diff1
value: 27.041346572613474
- type: nauc_precision_at_1_max
value: -23.181309312755804
- type: nauc_precision_at_1_std
value: -24.33076726484014
- type: nauc_precision_at_20_diff1
value: 11.993846579997673
- type: nauc_precision_at_20_max
value: -2.4792189693296227
- type: nauc_precision_at_20_std
value: 28.581394687807745
- type: nauc_precision_at_3_diff1
value: 20.70568446328836
- type: nauc_precision_at_3_max
value: 0.37326398699875984
- type: nauc_precision_at_3_std
value: -12.983918676694389
- type: nauc_precision_at_5_diff1
value: 19.47466335828124
- type: nauc_precision_at_5_max
value: -1.8921617684385994
- type: nauc_precision_at_5_std
value: -6.533875294402164
- type: nauc_recall_at_1000_diff1
value: 7.611201305723156
- type: nauc_recall_at_1000_max
value: 5.6416194035820055
- type: nauc_recall_at_1000_std
value: 61.695208644278
- type: nauc_recall_at_100_diff1
value: 10.0183258158735
- type: nauc_recall_at_100_max
value: -10.950612455698973
- type: nauc_recall_at_100_std
value: 33.06069987640471
- type: nauc_recall_at_10_diff1
value: 24.738210305731535
- type: nauc_recall_at_10_max
value: -2.6592454032071546
- type: nauc_recall_at_10_std
value: -4.83987517793115
- type: nauc_recall_at_1_diff1
value: 27.63901823865334
- type: nauc_recall_at_1_max
value: -18.6387233237763
- type: nauc_recall_at_1_std
value: -27.02164241863646
- type: nauc_recall_at_20_diff1
value: 17.79601177409034
- type: nauc_recall_at_20_max
value: -6.681637093148051
- type: nauc_recall_at_20_std
value: 3.369193919932238
- type: nauc_recall_at_3_diff1
value: 24.9589431081204
- type: nauc_recall_at_3_max
value: 2.4783640980500232
- type: nauc_recall_at_3_std
value: -19.567415651090702
- type: nauc_recall_at_5_diff1
value: 23.71803410135437
- type: nauc_recall_at_5_max
value: 1.6294309357641652
- type: nauc_recall_at_5_std
value: -15.365511906408983
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 31.426
- type: ndcg_at_100
value: 41.558
- type: ndcg_at_1000
value: 53.042
- type: ndcg_at_20
value: 31.108999999999998
- type: ndcg_at_3
value: 35.518
- type: ndcg_at_5
value: 33.235
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 27.551
- type: precision_at_100
value: 8.204
- type: precision_at_1000
value: 1.582
- type: precision_at_20
value: 19.796
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.4709999999999996
- type: recall_at_10
value: 19.563
- type: recall_at_100
value: 50.3
- type: recall_at_1000
value: 85.13199999999999
- type: recall_at_20
value: 26.738
- type: recall_at_3
value: 7.8420000000000005
- type: recall_at_5
value: 11.994
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.29850746268657
- type: ap
value: 30.109785890841966
- type: ap_weighted
value: 30.109785890841966
- type: f1
value: 61.76875915202924
- type: f1_weighted
value: 71.32073190458556
- type: main_score
value: 68.29850746268657
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.3068
- type: ap
value: 86.17914339624038
- type: ap_weighted
value: 86.17914339624038
- type: f1
value: 90.29716826358077
- type: f1_weighted
value: 90.29716826358077
- type: main_score
value: 90.3068
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.272000000000006
- type: f1
value: 45.57042543386915
- type: f1_weighted
value: 45.57042543386915
- type: main_score
value: 46.272000000000006
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 44.9469238081379
- type: v_measure
value: 44.9469238081379
- type: v_measure_std
value: 13.26811262671461
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 34.12071448053325
- type: v_measure
value: 34.12071448053325
- type: v_measure_std
value: 13.7019879046405
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 61.597667288657846
- type: map
value: 61.597667288657846
- type: mrr
value: 75.57940904893813
- type: nAUC_map_diff1
value: 8.745172077340095
- type: nAUC_map_max
value: 20.114863024035493
- type: nAUC_map_std
value: 15.991351189572192
- type: nAUC_mrr_diff1
value: 20.781369244159983
- type: nAUC_mrr_max
value: 30.78542570228559
- type: nAUC_mrr_std
value: 19.861484857303676
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 88.55587996301419
- type: cosine_spearman
value: 86.40317357420093
- type: euclidean_pearson
value: 86.93771958250231
- type: euclidean_spearman
value: 86.40317357420093
- type: main_score
value: 86.40317357420093
- type: manhattan_pearson
value: 86.92196577117366
- type: manhattan_spearman
value: 85.79834051556095
- type: pearson
value: 88.55587996301419
- type: spearman
value: 86.40317357420093
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.0064935064935
- type: f1
value: 79.29524254086299
- type: f1_weighted
value: 79.295242540863
- type: main_score
value: 80.0064935064935
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 35.27186813341181
- type: v_measure
value: 35.27186813341181
- type: v_measure_std
value: 0.8621482145872432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 28.411805064852295
- type: v_measure
value: 28.411805064852295
- type: v_measure_std
value: 0.7194290078011281
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.675
- type: f1
value: 40.15061931375577
- type: f1_weighted
value: 45.714186572727066
- type: main_score
value: 43.675
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 84.35640000000001
- type: ap
value: 79.07507736685174
- type: ap_weighted
value: 79.07507736685174
- type: f1
value: 84.32288494833531
- type: f1_weighted
value: 84.32288494833531
- type: main_score
value: 84.35640000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.35658914728684
- type: f1
value: 90.86877537911086
- type: f1_weighted
value: 91.3282092774443
- type: main_score
value: 91.35658914728684
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 60.63611491108071
- type: f1
value: 42.78886482112741
- type: f1_weighted
value: 63.44208631840539
- type: main_score
value: 60.63611491108071
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 66.68796234028245
- type: f1
value: 64.44940791000278
- type: f1_weighted
value: 65.77554417406792
- type: main_score
value: 66.68796234028245
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 73.0598520511096
- type: f1
value: 72.14267273884774
- type: f1_weighted
value: 72.93345180137516
- type: main_score
value: 73.0598520511096
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 31.143081341699606
- type: v_measure
value: 31.143081341699606
- type: v_measure_std
value: 1.5578716347076906
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 27.010818869829556
- type: v_measure
value: 27.010818869829556
- type: v_measure_std
value: 1.1771554540819378
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 30.20503776754942
- type: map
value: 30.20503776754942
- type: mrr
value: 31.076636002733437
- type: nAUC_map_diff1
value: 7.290568655287842
- type: nAUC_map_max
value: -21.381599355932945
- type: nAUC_map_std
value: -7.709920607543168
- type: nAUC_mrr_diff1
value: 7.558397329284913
- type: nAUC_mrr_max
value: -15.981397186427607
- type: nAUC_mrr_std
value: -4.870495243168834
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 51.85893476633338
- type: v_measure
value: 51.85893476633338
- type: v_measure_std
value: 4.704770139385852
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 61.8124222918822
- type: v_measure
value: 61.8124222918822
- type: v_measure_std
value: 11.994472578100165
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 77.63310776935984
- type: cosine_spearman
value: 69.86468291111039
- type: euclidean_pearson
value: 73.91537077798837
- type: euclidean_spearman
value: 69.86468376650203
- type: main_score
value: 69.86468291111039
- type: manhattan_pearson
value: 73.68616048370464
- type: manhattan_spearman
value: 69.76232036206659
- type: pearson
value: 77.63310776935984
- type: spearman
value: 69.86468291111039
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 57.71716838245049
- type: cosine_spearman
value: 61.797855543446424
- type: euclidean_pearson
value: 58.22958675325848
- type: euclidean_spearman
value: 61.797855543446424
- type: main_score
value: 61.797855543446424
- type: manhattan_pearson
value: 57.63117544997929
- type: manhattan_spearman
value: 61.3629404350085
- type: pearson
value: 57.71716838245049
- type: spearman
value: 61.797855543446424
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 82.30260026790903
- type: cosine_spearman
value: 82.66959813070869
- type: euclidean_pearson
value: 82.08383017580783
- type: euclidean_spearman
value: 82.66959813070869
- type: main_score
value: 82.66959813070869
- type: manhattan_pearson
value: 81.77991451392153
- type: manhattan_spearman
value: 82.3652534745606
- type: pearson
value: 82.30260026790903
- type: spearman
value: 82.66959813070869
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 71.50608384084478
- type: cosine_spearman
value: 68.94968064977785
- type: euclidean_pearson
value: 70.73381299949564
- type: euclidean_spearman
value: 68.94968064977785
- type: main_score
value: 68.94968064977785
- type: manhattan_pearson
value: 70.5385486953787
- type: manhattan_spearman
value: 68.82132770672365
- type: pearson
value: 71.50608384084478
- type: spearman
value: 68.94968064977785
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.66969825874907
- type: cosine_spearman
value: 75.55374982088381
- type: euclidean_pearson
value: 75.9339313749594
- type: euclidean_spearman
value: 75.55374982088381
- type: main_score
value: 75.55374982088381
- type: manhattan_pearson
value: 75.88287553383817
- type: manhattan_spearman
value: 75.50729812977688
- type: pearson
value: 73.66969825874907
- type: spearman
value: 75.55374982088381
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 74.5954724414016
- type: cosine_spearman
value: 77.2688820850505
- type: euclidean_pearson
value: 77.19866353971555
- type: euclidean_spearman
value: 77.2688820850505
- type: main_score
value: 77.2688820850505
- type: manhattan_pearson
value: 77.27072603680978
- type: manhattan_spearman
value: 77.29408453673607
- type: pearson
value: 74.5954724414016
- type: spearman
value: 77.2688820850505
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 71.52588722654055
- type: cosine_spearman
value: 74.97235736456061
- type: euclidean_pearson
value: 74.51952528854038
- type: euclidean_spearman
value: 74.97235736456061
- type: main_score
value: 74.97235736456061
- type: manhattan_pearson
value: 74.48272300884209
- type: manhattan_spearman
value: 74.80633649415176
- type: pearson
value: 71.52588722654055
- type: spearman
value: 74.97235736456061
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 68.80031120401976
- type: cosine_spearman
value: 69.07945196478491
- type: euclidean_pearson
value: 68.99674496430792
- type: euclidean_spearman
value: 69.07945196478491
- type: main_score
value: 69.07945196478491
- type: manhattan_pearson
value: 69.00236107775687
- type: manhattan_spearman
value: 68.98064879049272
- type: pearson
value: 68.80031120401976
- type: spearman
value: 69.07945196478491
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 65.6898007230089
- type: cosine_spearman
value: 69.72386211803668
- type: euclidean_pearson
value: 69.04523003701475
- type: euclidean_spearman
value: 69.72386211803668
- type: main_score
value: 69.72386211803668
- type: manhattan_pearson
value: 68.80479743770702
- type: manhattan_spearman
value: 69.43264575177459
- type: pearson
value: 65.6898007230089
- type: spearman
value: 69.72386211803668
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 79.74088066874383
- type: map
value: 79.74088066874383
- type: mrr
value: 94.47697455050397
- type: nAUC_map_diff1
value: 8.036086256905502
- type: nAUC_map_max
value: 54.88199803816819
- type: nAUC_map_std
value: 69.16267942176574
- type: nAUC_mrr_diff1
value: 50.020738477678115
- type: nAUC_mrr_max
value: 83.28922770326483
- type: nAUC_mrr_std
value: 83.63973501802224
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.83861386138614
- type: cosine_accuracy_threshold
value: 74.75666999816895
- type: cosine_ap
value: 96.15132792066652
- type: cosine_f1
value: 91.84890656063618
- type: cosine_f1_threshold
value: 71.70594930648804
- type: cosine_precision
value: 91.30434782608695
- type: cosine_recall
value: 92.4
- type: dot_accuracy
value: 99.83861386138614
- type: dot_accuracy_threshold
value: 74.75666999816895
- type: dot_ap
value: 96.15132792066653
- type: dot_f1
value: 91.84890656063618
- type: dot_f1_threshold
value: 71.70596122741699
- type: dot_precision
value: 91.30434782608695
- type: dot_recall
value: 92.4
- type: euclidean_accuracy
value: 99.83861386138614
- type: euclidean_accuracy_threshold
value: 71.05395793914795
- type: euclidean_ap
value: 96.15132792066652
- type: euclidean_f1
value: 91.84890656063618
- type: euclidean_f1_threshold
value: 75.22505521774292
- type: euclidean_precision
value: 91.30434782608695
- type: euclidean_recall
value: 92.4
- type: main_score
value: 96.15132792066653
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_accuracy_threshold
value: 1547.6950645446777
- type: manhattan_ap
value: 96.06151211452136
- type: manhattan_f1
value: 91.61676646706587
- type: manhattan_f1_threshold
value: 1626.3608932495117
- type: manhattan_precision
value: 91.43426294820716
- type: manhattan_recall
value: 91.8
- type: max_ap
value: 96.15132792066653
- type: max_f1
value: 91.84890656063618
- type: max_precision
value: 91.43426294820716
- type: max_recall
value: 92.4
- type: similarity_accuracy
value: 99.83861386138614
- type: similarity_accuracy_threshold
value: 74.75666999816895
- type: similarity_ap
value: 96.15132792066652
- type: similarity_f1
value: 91.84890656063618
- type: similarity_f1_threshold
value: 71.70594930648804
- type: similarity_precision
value: 91.30434782608695
- type: similarity_recall
value: 92.4
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 61.24120328328453
- type: v_measure
value: 61.24120328328453
- type: v_measure_std
value: 3.9946560691100372
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 33.808268374864745
- type: v_measure
value: 33.808268374864745
- type: v_measure_std
value: 1.2212188701887239
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 52.19806018468037
- type: map
value: 52.19806018468037
- type: mrr
value: 52.98921462524404
- type: nAUC_map_diff1
value: 37.41443156995912
- type: nAUC_map_max
value: 9.410262727675603
- type: nAUC_map_std
value: 8.7094185014992
- type: nAUC_mrr_diff1
value: 37.78202772392581
- type: nAUC_mrr_max
value: 10.517635536565816
- type: nAUC_mrr_std
value: 8.509423813772491
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.48413700430812
- type: cosine_spearman
value: 30.357162200875816
- type: dot_pearson
value: 30.484140144824938
- type: dot_spearman
value: 30.357162200875816
- type: main_score
value: 30.357162200875816
- type: pearson
value: 30.48413700430812
- type: spearman
value: 30.357162200875816
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.8359375
- type: ap
value: 12.482653786025985
- type: ap_weighted
value: 12.482653786025985
- type: f1
value: 51.328608527332385
- type: f1_weighted
value: 74.07974463955398
- type: main_score
value: 66.8359375
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 53.907753254103
- type: f1
value: 54.22707647269581
- type: f1_weighted
value: 53.611822984407695
- type: main_score
value: 53.907753254103
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 38.1364789307295
- type: v_measure
value: 38.1364789307295
- type: v_measure_std
value: 2.0731634966352077
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 82.66674614054956
- type: cosine_accuracy_threshold
value: 79.80123162269592
- type: cosine_ap
value: 63.28209719072804
- type: cosine_f1
value: 60.16389710903711
- type: cosine_f1_threshold
value: 72.22893834114075
- type: cosine_precision
value: 52.90232185748599
- type: cosine_recall
value: 69.73614775725594
- type: dot_accuracy
value: 82.66674614054956
- type: dot_accuracy_threshold
value: 79.8012375831604
- type: dot_ap
value: 63.282103870645166
- type: dot_f1
value: 60.16389710903711
- type: dot_f1_threshold
value: 72.22894430160522
- type: dot_precision
value: 52.90232185748599
- type: dot_recall
value: 69.73614775725594
- type: euclidean_accuracy
value: 82.66674614054956
- type: euclidean_accuracy_threshold
value: 63.55905532836914
- type: euclidean_ap
value: 63.282095399953164
- type: euclidean_f1
value: 60.16389710903711
- type: euclidean_f1_threshold
value: 74.5265781879425
- type: euclidean_precision
value: 52.90232185748599
- type: euclidean_recall
value: 69.73614775725594
- type: main_score
value: 63.282103870645166
- type: manhattan_accuracy
value: 82.74423317637242
- type: manhattan_accuracy_threshold
value: 1415.380859375
- type: manhattan_ap
value: 63.26931757839598
- type: manhattan_f1
value: 60.11014948859166
- type: manhattan_f1_threshold
value: 1632.522201538086
- type: manhattan_precision
value: 52.359506559624045
- type: manhattan_recall
value: 70.55408970976254
- type: max_ap
value: 63.282103870645166
- type: max_f1
value: 60.16389710903711
- type: max_precision
value: 52.90232185748599
- type: max_recall
value: 70.55408970976254
- type: similarity_accuracy
value: 82.66674614054956
- type: similarity_accuracy_threshold
value: 79.80123162269592
- type: similarity_ap
value: 63.28209719072804
- type: similarity_f1
value: 60.16389710903711
- type: similarity_f1_threshold
value: 72.22893834114075
- type: similarity_precision
value: 52.90232185748599
- type: similarity_recall
value: 69.73614775725594
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 88.10105949470253
- type: cosine_accuracy_threshold
value: 68.95147562026978
- type: cosine_ap
value: 84.65516103854583
- type: cosine_f1
value: 76.54581123301605
- type: cosine_f1_threshold
value: 63.92929553985596
- type: cosine_precision
value: 72.46526344751685
- type: cosine_recall
value: 81.11333538651063
- type: dot_accuracy
value: 88.10105949470253
- type: dot_accuracy_threshold
value: 68.95147562026978
- type: dot_ap
value: 84.65516301437592
- type: dot_f1
value: 76.54581123301605
- type: dot_f1_threshold
value: 63.92928957939148
- type: dot_precision
value: 72.46526344751685
- type: dot_recall
value: 81.11333538651063
- type: euclidean_accuracy
value: 88.10105949470253
- type: euclidean_accuracy_threshold
value: 78.80169153213501
- type: euclidean_ap
value: 84.65517268264233
- type: euclidean_f1
value: 76.54581123301605
- type: euclidean_f1_threshold
value: 84.93610620498657
- type: euclidean_precision
value: 72.46526344751685
- type: euclidean_recall
value: 81.11333538651063
- type: main_score
value: 84.65517268264233
- type: manhattan_accuracy
value: 88.08941669577366
- type: manhattan_accuracy_threshold
value: 1739.3169403076172
- type: manhattan_ap
value: 84.64592398855694
- type: manhattan_f1
value: 76.62890540443034
- type: manhattan_f1_threshold
value: 1861.344337463379
- type: manhattan_precision
value: 72.09775967413442
- type: manhattan_recall
value: 81.76778564829073
- type: max_ap
value: 84.65517268264233
- type: max_f1
value: 76.62890540443034
- type: max_precision
value: 72.46526344751685
- type: max_recall
value: 81.76778564829073
- type: similarity_accuracy
value: 88.10105949470253
- type: similarity_accuracy_threshold
value: 68.95147562026978
- type: similarity_ap
value: 84.65516103854583
- type: similarity_f1
value: 76.54581123301605
- type: similarity_f1_threshold
value: 63.92929553985596
- type: similarity_precision
value: 72.46526344751685
- type: similarity_recall
value: 81.11333538651063
---
# yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-v1.5`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yixuan-chia/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
|
RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf
|
RichardErkhov
| 2024-08-29T05:31:15Z | 8 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-08-29T02:53:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-ko-OpenOrca-Platypus-v2 - GGUF
- Model creator: https://huggingface.co/shleeeee/
- Original model: https://huggingface.co/shleeeee/mistral-ko-OpenOrca-Platypus-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-ko-OpenOrca-Platypus-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-ko-OpenOrca-Platypus-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-ko-OpenOrca-Platypus-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-ko-OpenOrca-Platypus-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-ko-OpenOrca-Platypus-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-ko-OpenOrca-Platypus-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q6_K.gguf) | Q6_K | 5.53GB |
| [mistral-ko-OpenOrca-Platypus-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/shleeeee_-_mistral-ko-OpenOrca-Platypus-v2-gguf/blob/main/mistral-ko-OpenOrca-Platypus-v2.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-ko-OpenOrca-Platypus-v2
It is a fine-tuned model using Korean in the mistral-7b model
## Model Details
* **Model Developers** : shleeeee(Seunghyeon Lee) , oopsung(Sungwoo Park)
|
sfulay/zephyr-7b-dpo-full-gpt-low-curriculum
|
sfulay
| 2024-08-29T05:30:23Z | 7 | 0 | null |
[
"safetensors",
"mistral",
"trl",
"dpo",
"generated_from_trainer",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-08-29T02:11:55Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-gpt-low-curriculum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full-gpt-low-curriculum
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5229
- Rewards/chosen: -0.8152
- Rewards/rejected: -1.5392
- Rewards/accuracies: 0.7069
- Rewards/margins: 0.7241
- Logps/rejected: -399.5724
- Logps/chosen: -365.5233
- Logits/rejected: 1.4072
- Logits/chosen: 0.3892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6558 | 0.1147 | 50 | 0.6455 | 0.0044 | -0.0965 | 0.6810 | 0.1009 | -255.3015 | -283.5690 | -2.4863 | -2.5882 |
| 0.5907 | 0.2294 | 100 | 0.5894 | -0.2321 | -0.5376 | 0.7069 | 0.3055 | -299.4117 | -307.2200 | -2.4655 | -2.5910 |
| 0.5657 | 0.3440 | 150 | 0.5474 | -0.5168 | -1.0293 | 0.7198 | 0.5125 | -348.5750 | -335.6879 | -0.6546 | -1.0350 |
| 0.5303 | 0.4587 | 200 | 0.5414 | -1.0659 | -1.7181 | 0.75 | 0.6522 | -417.4532 | -390.5937 | 0.7246 | 0.0707 |
| 0.5472 | 0.5734 | 250 | 0.5268 | -0.8095 | -1.4718 | 0.7155 | 0.6623 | -392.8294 | -364.9606 | 1.2657 | 0.4213 |
| 0.5517 | 0.6881 | 300 | 0.5284 | -0.8914 | -1.6145 | 0.7112 | 0.7231 | -407.0940 | -373.1438 | 1.3137 | 0.2994 |
| 0.4943 | 0.8028 | 350 | 0.5237 | -0.8328 | -1.5668 | 0.7112 | 0.7339 | -402.3227 | -367.2895 | 1.4252 | 0.4044 |
| 0.5335 | 0.9174 | 400 | 0.5229 | -0.8152 | -1.5392 | 0.7069 | 0.7241 | -399.5724 | -365.5233 | 1.4072 | 0.3892 |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
muscle-memory/opt-125m-boolq-10ep
|
muscle-memory
| 2024-08-29T05:13:05Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T05:12:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BoHu370/lgd-old-man
|
BoHu370
| 2024-08-29T05:12:25Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"BLACK-MTYH-WUKONG",
"dataset:BoHu370/Land_grandfather",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T07:52:54Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- BLACK-MTYH-WUKONG
widget:
- text: a photo of lgd old man playing computer game
datasets:
- BoHu370/Land_grandfather
base_model: CompVis/stable-diffusion-v1-4
pipeline_tag: text-to-image
library_name: diffusers
---
# DreamBooth model for the Land Grandfather concept
This is a Stable Diffusion model fine-tuned on the Lang Grandfather(lgd) concept with DreamBooth.
The dataset is from the game ** Black Myth WUKONG **.
It can be used by modifying the `instance_prompt`: **a photo of lgd old man**, the trigger word is `lgd`
## Usage
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained('BoHu370/lgd-old-man').to('cuda')
name_of_your_concept = 'lgd'
type_of_thing = 'old man'
prompt = f"a photo of {name_of_your_concept} {type_of_thing} playing computer game"
guidance_scale = 5
image = pipe(prompt, guidance_scale=guidance_scale).images[0]
image
```
## Examples




|
Rich-J/subnet29_upload_c02_1
|
Rich-J
| 2024-08-29T04:57:22Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T04:54:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kudod/roberta-large-ner-ghtk-cs-new-data-seg-3090-29Aug-2
|
Kudod
| 2024-08-29T04:54:24Z | 5 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"region:us"
] | null | 2024-08-29T04:24:23Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: roberta-large-ner-ghtk-cs-new-data-seg-3090-29Aug-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-ner-ghtk-cs-new-data-seg-3090-29Aug-2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3818
- cmt: {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14}
- Tk: {'precision': 0.4835164835164835, 'recall': 0.3793103448275862, 'f1': 0.42512077294685996, 'number': 116}
- A: {'precision': 0.9557109557109557, 'recall': 0.951276102088167, 'f1': 0.9534883720930232, 'number': 431}
- Gày: {'precision': 0.7073170731707317, 'recall': 0.8529411764705882, 'f1': 0.7733333333333334, 'number': 34}
- Gày trừu tượng: {'precision': 0.8875502008032129, 'recall': 0.9057377049180327, 'f1': 0.896551724137931, 'number': 488}
- Gân hàng: {'precision': 0.85, 'recall': 0.918918918918919, 'f1': 0.8831168831168831, 'number': 37}
- Hương thức thanh toán: {'precision': 1.0, 'recall': 0.8181818181818182, 'f1': 0.9, 'number': 33}
- Hối lượng: {'precision': 0.75, 'recall': 0.46153846153846156, 'f1': 0.5714285714285714, 'number': 13}
- Iền: {'precision': 0.7619047619047619, 'recall': 0.8205128205128205, 'f1': 0.7901234567901233, 'number': 39}
- Iờ: {'precision': 0.7894736842105263, 'recall': 0.7894736842105263, 'f1': 0.7894736842105263, 'number': 38}
- Mail: {'precision': 0.9575971731448764, 'recall': 0.9217687074829932, 'f1': 0.9393414211438476, 'number': 294}
- Ã đơn: {'precision': 0.7782805429864253, 'recall': 0.8472906403940886, 'f1': 0.8113207547169811, 'number': 203}
- Ên người: {'precision': 0.6551724137931034, 'recall': 0.6129032258064516, 'f1': 0.6333333333333333, 'number': 31}
- Đt: {'precision': 0.8796680497925311, 'recall': 0.9658314350797267, 'f1': 0.9207383279044516, 'number': 878}
- Đt trừu tượng: {'precision': 0.8521739130434782, 'recall': 0.8412017167381974, 'f1': 0.8466522678185744, 'number': 233}
- Ơn vị đo: {'precision': 0.6666666666666666, 'recall': 0.7857142857142857, 'f1': 0.721311475409836, 'number': 28}
- Ản phẩm cụ thể: {'precision': 0.8, 'recall': 0.6299212598425197, 'f1': 0.7048458149779735, 'number': 127}
- Ản phẩm trừu tượng: {'precision': 0.7441860465116279, 'recall': 0.7272727272727273, 'f1': 0.735632183908046, 'number': 44}
- Ịa chỉ cụ thể: {'precision': 0.4318181818181818, 'recall': 0.4418604651162791, 'f1': 0.4367816091954023, 'number': 43}
- Ịa chỉ trừu tượng: {'precision': 0.7313432835820896, 'recall': 0.6447368421052632, 'f1': 0.6853146853146853, 'number': 76}
- Overall Precision: 0.8551
- Overall Recall: 0.8666
- Overall F1: 0.8608
- Overall Accuracy: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | cmt | Tk | A | Gày | Gày trừu tượng | Gân hàng | Hương thức thanh toán | Hối lượng | Iền | Iờ | Mail | Ã đơn | Ên người | Đt | Đt trừu tượng | Ơn vị đo | Ản phẩm cụ thể | Ản phẩm trừu tượng | Ịa chỉ cụ thể | Ịa chỉ trừu tượng | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1838 | 1.0 | 735 | 0.2965 | {'precision': 0.7692307692307693, 'recall': 0.7142857142857143, 'f1': 0.7407407407407408, 'number': 14} | {'precision': 0.6530612244897959, 'recall': 0.5517241379310345, 'f1': 0.5981308411214952, 'number': 116} | {'precision': 0.9414519906323185, 'recall': 0.9327146171693735, 'f1': 0.9370629370629371, 'number': 431} | {'precision': 0.7586206896551724, 'recall': 0.6470588235294118, 'f1': 0.6984126984126984, 'number': 34} | {'precision': 0.9230769230769231, 'recall': 0.8360655737704918, 'f1': 0.8774193548387098, 'number': 488} | {'precision': 0.8857142857142857, 'recall': 0.8378378378378378, 'f1': 0.8611111111111112, 'number': 37} | {'precision': 0.7878787878787878, 'recall': 0.7878787878787878, 'f1': 0.7878787878787878, 'number': 33} | {'precision': 0.4117647058823529, 'recall': 0.5384615384615384, 'f1': 0.4666666666666667, 'number': 13} | {'precision': 0.6491228070175439, 'recall': 0.9487179487179487, 'f1': 0.7708333333333334, 'number': 39} | {'precision': 0.5357142857142857, 'recall': 0.7894736842105263, 'f1': 0.6382978723404255, 'number': 38} | {'precision': 0.8724035608308606, 'recall': 1.0, 'f1': 0.9318541996830427, 'number': 294} | {'precision': 0.7751196172248804, 'recall': 0.7980295566502463, 'f1': 0.7864077669902914, 'number': 203} | {'precision': 0.35714285714285715, 'recall': 0.16129032258064516, 'f1': 0.2222222222222222, 'number': 31} | {'precision': 0.7718832891246684, 'recall': 0.9943052391799544, 'f1': 0.8690890990542558, 'number': 878} | {'precision': 0.8578947368421053, 'recall': 0.6995708154506438, 'f1': 0.7706855791962176, 'number': 233} | {'precision': 0.7058823529411765, 'recall': 0.8571428571428571, 'f1': 0.7741935483870968, 'number': 28} | {'precision': 0.7934782608695652, 'recall': 0.5748031496062992, 'f1': 0.6666666666666666, 'number': 127} | {'precision': 0.5172413793103449, 'recall': 0.3409090909090909, 'f1': 0.410958904109589, 'number': 44} | {'precision': 0.4, 'recall': 0.13953488372093023, 'f1': 0.2068965517241379, 'number': 43} | {'precision': 0.7678571428571429, 'recall': 0.5657894736842105, 'f1': 0.6515151515151516, 'number': 76} | 0.8132 | 0.8422 | 0.8274 | 0.9145 |
| 0.1539 | 2.0 | 1470 | 0.2412 | {'precision': 0.5, 'recall': 0.2857142857142857, 'f1': 0.36363636363636365, 'number': 14} | {'precision': 0.6901408450704225, 'recall': 0.4224137931034483, 'f1': 0.5240641711229946, 'number': 116} | {'precision': 0.9265033407572383, 'recall': 0.9651972157772621, 'f1': 0.9454545454545454, 'number': 431} | {'precision': 0.6666666666666666, 'recall': 0.7058823529411765, 'f1': 0.6857142857142857, 'number': 34} | {'precision': 0.9212962962962963, 'recall': 0.8155737704918032, 'f1': 0.8652173913043477, 'number': 488} | {'precision': 0.7647058823529411, 'recall': 0.7027027027027027, 'f1': 0.7323943661971832, 'number': 37} | {'precision': 0.7575757575757576, 'recall': 0.7575757575757576, 'f1': 0.7575757575757576, 'number': 33} | {'precision': 0.625, 'recall': 0.38461538461538464, 'f1': 0.4761904761904762, 'number': 13} | {'precision': 0.8518518518518519, 'recall': 0.5897435897435898, 'f1': 0.6969696969696971, 'number': 39} | {'precision': 0.5789473684210527, 'recall': 0.868421052631579, 'f1': 0.6947368421052632, 'number': 38} | {'precision': 0.9831223628691983, 'recall': 0.7925170068027211, 'f1': 0.8775894538606404, 'number': 294} | {'precision': 0.6182432432432432, 'recall': 0.9014778325123153, 'f1': 0.7334669338677355, 'number': 203} | {'precision': 0.75, 'recall': 0.0967741935483871, 'f1': 0.1714285714285714, 'number': 31} | {'precision': 0.8275529865125241, 'recall': 0.9783599088838268, 'f1': 0.8966597077244258, 'number': 878} | {'precision': 0.8232758620689655, 'recall': 0.8197424892703863, 'f1': 0.8215053763440859, 'number': 233} | {'precision': 0.6666666666666666, 'recall': 0.21428571428571427, 'f1': 0.3243243243243243, 'number': 28} | {'precision': 0.9512195121951219, 'recall': 0.30708661417322836, 'f1': 0.4642857142857143, 'number': 127} | {'precision': 0.782608695652174, 'recall': 0.4090909090909091, 'f1': 0.537313432835821, 'number': 44} | {'precision': 0.4666666666666667, 'recall': 0.32558139534883723, 'f1': 0.3835616438356165, 'number': 43} | {'precision': 0.8823529411764706, 'recall': 0.5921052631578947, 'f1': 0.7086614173228346, 'number': 76} | 0.8325 | 0.8106 | 0.8214 | 0.9143 |
| 0.1085 | 3.0 | 2205 | 0.2522 | {'precision': 0.6923076923076923, 'recall': 0.6428571428571429, 'f1': 0.6666666666666666, 'number': 14} | {'precision': 0.6567164179104478, 'recall': 0.3793103448275862, 'f1': 0.4808743169398907, 'number': 116} | {'precision': 0.9639423076923077, 'recall': 0.9303944315545244, 'f1': 0.9468713105076741, 'number': 431} | {'precision': 0.6122448979591837, 'recall': 0.8823529411764706, 'f1': 0.7228915662650602, 'number': 34} | {'precision': 0.8770161290322581, 'recall': 0.8913934426229508, 'f1': 0.8841463414634145, 'number': 488} | {'precision': 0.8285714285714286, 'recall': 0.7837837837837838, 'f1': 0.8055555555555555, 'number': 37} | {'precision': 0.7272727272727273, 'recall': 0.7272727272727273, 'f1': 0.7272727272727273, 'number': 33} | {'precision': 0.5333333333333333, 'recall': 0.6153846153846154, 'f1': 0.5714285714285715, 'number': 13} | {'precision': 0.7692307692307693, 'recall': 0.7692307692307693, 'f1': 0.7692307692307693, 'number': 39} | {'precision': 0.6875, 'recall': 0.5789473684210527, 'f1': 0.6285714285714286, 'number': 38} | {'precision': 0.8738738738738738, 'recall': 0.9897959183673469, 'f1': 0.9282296650717703, 'number': 294} | {'precision': 0.6838235294117647, 'recall': 0.916256157635468, 'f1': 0.7831578947368422, 'number': 203} | {'precision': 0.43243243243243246, 'recall': 0.5161290322580645, 'f1': 0.47058823529411764, 'number': 31} | {'precision': 0.8061224489795918, 'recall': 0.989749430523918, 'f1': 0.8885480572597136, 'number': 878} | {'precision': 0.9086538461538461, 'recall': 0.8111587982832618, 'f1': 0.8571428571428572, 'number': 233} | {'precision': 0.6333333333333333, 'recall': 0.6785714285714286, 'f1': 0.6551724137931035, 'number': 28} | {'precision': 0.8556701030927835, 'recall': 0.6535433070866141, 'f1': 0.7410714285714285, 'number': 127} | {'precision': 0.7105263157894737, 'recall': 0.6136363636363636, 'f1': 0.6585365853658537, 'number': 44} | {'precision': 0.3783783783783784, 'recall': 0.32558139534883723, 'f1': 0.35000000000000003, 'number': 43} | {'precision': 0.75, 'recall': 0.5921052631578947, 'f1': 0.6617647058823529, 'number': 76} | 0.8186 | 0.8659 | 0.8416 | 0.9268 |
| 0.0976 | 4.0 | 2940 | 0.2768 | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 14} | {'precision': 0.6507936507936508, 'recall': 0.35344827586206895, 'f1': 0.45810055865921795, 'number': 116} | {'precision': 0.9618138424821002, 'recall': 0.9350348027842227, 'f1': 0.9482352941176471, 'number': 431} | {'precision': 0.7222222222222222, 'recall': 0.7647058823529411, 'f1': 0.7428571428571428, 'number': 34} | {'precision': 0.8966942148760331, 'recall': 0.889344262295082, 'f1': 0.8930041152263374, 'number': 488} | {'precision': 0.7777777777777778, 'recall': 0.7567567567567568, 'f1': 0.7671232876712328, 'number': 37} | {'precision': 0.8666666666666667, 'recall': 0.7878787878787878, 'f1': 0.8253968253968254, 'number': 33} | {'precision': 0.75, 'recall': 0.23076923076923078, 'f1': 0.3529411764705882, 'number': 13} | {'precision': 0.7857142857142857, 'recall': 0.8461538461538461, 'f1': 0.8148148148148148, 'number': 39} | {'precision': 0.7368421052631579, 'recall': 0.3684210526315789, 'f1': 0.49122807017543857, 'number': 38} | {'precision': 0.8706624605678234, 'recall': 0.9387755102040817, 'f1': 0.9034369885433716, 'number': 294} | {'precision': 0.756198347107438, 'recall': 0.9014778325123153, 'f1': 0.8224719101123596, 'number': 203} | {'precision': 0.7272727272727273, 'recall': 0.5161290322580645, 'f1': 0.6037735849056604, 'number': 31} | {'precision': 0.8695652173913043, 'recall': 0.979498861047836, 'f1': 0.9212640599892876, 'number': 878} | {'precision': 0.7570422535211268, 'recall': 0.9227467811158798, 'f1': 0.8317214700193424, 'number': 233} | {'precision': 0.6571428571428571, 'recall': 0.8214285714285714, 'f1': 0.73015873015873, 'number': 28} | {'precision': 0.8295454545454546, 'recall': 0.5748031496062992, 'f1': 0.6790697674418605, 'number': 127} | {'precision': 0.6744186046511628, 'recall': 0.6590909090909091, 'f1': 0.6666666666666666, 'number': 44} | {'precision': 0.38235294117647056, 'recall': 0.3023255813953488, 'f1': 0.33766233766233766, 'number': 43} | {'precision': 0.75, 'recall': 0.631578947368421, 'f1': 0.6857142857142857, 'number': 76} | 0.8434 | 0.86 | 0.8516 | 0.9276 |
| 0.0689 | 5.0 | 3675 | 0.2804 | {'precision': 0.5789473684210527, 'recall': 0.7857142857142857, 'f1': 0.6666666666666667, 'number': 14} | {'precision': 0.5371900826446281, 'recall': 0.5603448275862069, 'f1': 0.5485232067510548, 'number': 116} | {'precision': 0.9534883720930233, 'recall': 0.951276102088167, 'f1': 0.9523809523809523, 'number': 431} | {'precision': 0.7, 'recall': 0.8235294117647058, 'f1': 0.7567567567567567, 'number': 34} | {'precision': 0.878727634194831, 'recall': 0.9057377049180327, 'f1': 0.8920282542885973, 'number': 488} | {'precision': 0.8, 'recall': 0.8648648648648649, 'f1': 0.8311688311688312, 'number': 37} | {'precision': 0.896551724137931, 'recall': 0.7878787878787878, 'f1': 0.8387096774193549, 'number': 33} | {'precision': 0.6153846153846154, 'recall': 0.6153846153846154, 'f1': 0.6153846153846154, 'number': 13} | {'precision': 0.7804878048780488, 'recall': 0.8205128205128205, 'f1': 0.8, 'number': 39} | {'precision': 0.6382978723404256, 'recall': 0.7894736842105263, 'f1': 0.7058823529411764, 'number': 38} | {'precision': 0.9628252788104089, 'recall': 0.8809523809523809, 'f1': 0.9200710479573712, 'number': 294} | {'precision': 0.821256038647343, 'recall': 0.8374384236453202, 'f1': 0.8292682926829268, 'number': 203} | {'precision': 0.64, 'recall': 0.5161290322580645, 'f1': 0.5714285714285714, 'number': 31} | {'precision': 0.8674089068825911, 'recall': 0.9760820045558086, 'f1': 0.9185423365487674, 'number': 878} | {'precision': 0.825531914893617, 'recall': 0.8326180257510729, 'f1': 0.8290598290598289, 'number': 233} | {'precision': 0.696969696969697, 'recall': 0.8214285714285714, 'f1': 0.7540983606557378, 'number': 28} | {'precision': 0.7419354838709677, 'recall': 0.5433070866141733, 'f1': 0.6272727272727272, 'number': 127} | {'precision': 0.875, 'recall': 0.6363636363636364, 'f1': 0.7368421052631579, 'number': 44} | {'precision': 0.42857142857142855, 'recall': 0.27906976744186046, 'f1': 0.3380281690140845, 'number': 43} | {'precision': 0.7741935483870968, 'recall': 0.631578947368421, 'f1': 0.6956521739130435, 'number': 76} | 0.8479 | 0.8625 | 0.8552 | 0.9317 |
| 0.0565 | 6.0 | 4410 | 0.2920 | {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14} | {'precision': 0.6, 'recall': 0.5948275862068966, 'f1': 0.5974025974025975, 'number': 116} | {'precision': 0.9686746987951808, 'recall': 0.9327146171693735, 'f1': 0.950354609929078, 'number': 431} | {'precision': 0.6511627906976745, 'recall': 0.8235294117647058, 'f1': 0.7272727272727273, 'number': 34} | {'precision': 0.8782435129740519, 'recall': 0.9016393442622951, 'f1': 0.8897876643073812, 'number': 488} | {'precision': 0.7857142857142857, 'recall': 0.8918918918918919, 'f1': 0.8354430379746834, 'number': 37} | {'precision': 0.8333333333333334, 'recall': 0.7575757575757576, 'f1': 0.7936507936507938, 'number': 33} | {'precision': 0.75, 'recall': 0.46153846153846156, 'f1': 0.5714285714285714, 'number': 13} | {'precision': 0.7659574468085106, 'recall': 0.9230769230769231, 'f1': 0.8372093023255814, 'number': 39} | {'precision': 0.6808510638297872, 'recall': 0.8421052631578947, 'f1': 0.7529411764705882, 'number': 38} | {'precision': 0.8881987577639752, 'recall': 0.9727891156462585, 'f1': 0.9285714285714285, 'number': 294} | {'precision': 0.7427385892116183, 'recall': 0.8817733990147784, 'f1': 0.8063063063063063, 'number': 203} | {'precision': 0.6, 'recall': 0.6774193548387096, 'f1': 0.6363636363636364, 'number': 31} | {'precision': 0.8424657534246576, 'recall': 0.9806378132118451, 'f1': 0.9063157894736841, 'number': 878} | {'precision': 0.8170731707317073, 'recall': 0.8626609442060086, 'f1': 0.8392484342379958, 'number': 233} | {'precision': 0.7619047619047619, 'recall': 0.5714285714285714, 'f1': 0.6530612244897959, 'number': 28} | {'precision': 0.7982456140350878, 'recall': 0.7165354330708661, 'f1': 0.7551867219917011, 'number': 127} | {'precision': 0.8108108108108109, 'recall': 0.6818181818181818, 'f1': 0.7407407407407407, 'number': 44} | {'precision': 0.3829787234042553, 'recall': 0.4186046511627907, 'f1': 0.4, 'number': 43} | {'precision': 0.7142857142857143, 'recall': 0.6578947368421053, 'f1': 0.684931506849315, 'number': 76} | 0.8294 | 0.8859 | 0.8568 | 0.9376 |
| 0.0314 | 7.0 | 5145 | 0.3316 | {'precision': 0.6, 'recall': 0.8571428571428571, 'f1': 0.7058823529411764, 'number': 14} | {'precision': 0.5359477124183006, 'recall': 0.7068965517241379, 'f1': 0.6096654275092938, 'number': 116} | {'precision': 0.975, 'recall': 0.9048723897911833, 'f1': 0.9386281588447654, 'number': 431} | {'precision': 0.7, 'recall': 0.8235294117647058, 'f1': 0.7567567567567567, 'number': 34} | {'precision': 0.8949579831932774, 'recall': 0.8729508196721312, 'f1': 0.8838174273858921, 'number': 488} | {'precision': 0.8292682926829268, 'recall': 0.918918918918919, 'f1': 0.8717948717948718, 'number': 37} | {'precision': 0.9, 'recall': 0.8181818181818182, 'f1': 0.8571428571428572, 'number': 33} | {'precision': 0.7, 'recall': 0.5384615384615384, 'f1': 0.608695652173913, 'number': 13} | {'precision': 0.7727272727272727, 'recall': 0.8717948717948718, 'f1': 0.8192771084337349, 'number': 39} | {'precision': 0.8709677419354839, 'recall': 0.7105263157894737, 'f1': 0.782608695652174, 'number': 38} | {'precision': 0.9543859649122807, 'recall': 0.9251700680272109, 'f1': 0.9395509499136442, 'number': 294} | {'precision': 0.76, 'recall': 0.8423645320197044, 'f1': 0.7990654205607477, 'number': 203} | {'precision': 0.696969696969697, 'recall': 0.7419354838709677, 'f1': 0.71875, 'number': 31} | {'precision': 0.8981779206859593, 'recall': 0.9544419134396356, 'f1': 0.9254555494202098, 'number': 878} | {'precision': 0.8396624472573839, 'recall': 0.8540772532188842, 'f1': 0.846808510638298, 'number': 233} | {'precision': 0.6153846153846154, 'recall': 0.8571428571428571, 'f1': 0.7164179104477612, 'number': 28} | {'precision': 0.8217821782178217, 'recall': 0.6535433070866141, 'f1': 0.7280701754385963, 'number': 127} | {'precision': 0.6956521739130435, 'recall': 0.7272727272727273, 'f1': 0.711111111111111, 'number': 44} | {'precision': 0.4186046511627907, 'recall': 0.4186046511627907, 'f1': 0.4186046511627907, 'number': 43} | {'precision': 0.7012987012987013, 'recall': 0.7105263157894737, 'f1': 0.7058823529411765, 'number': 76} | 0.8520 | 0.8691 | 0.8605 | 0.9329 |
| 0.024 | 8.0 | 5880 | 0.3492 | {'precision': 0.625, 'recall': 0.7142857142857143, 'f1': 0.6666666666666666, 'number': 14} | {'precision': 0.5432098765432098, 'recall': 0.3793103448275862, 'f1': 0.4467005076142132, 'number': 116} | {'precision': 0.963855421686747, 'recall': 0.9280742459396751, 'f1': 0.9456264775413711, 'number': 431} | {'precision': 0.7073170731707317, 'recall': 0.8529411764705882, 'f1': 0.7733333333333334, 'number': 34} | {'precision': 0.8821138211382114, 'recall': 0.889344262295082, 'f1': 0.8857142857142857, 'number': 488} | {'precision': 0.868421052631579, 'recall': 0.8918918918918919, 'f1': 0.88, 'number': 37} | {'precision': 1.0, 'recall': 0.8484848484848485, 'f1': 0.9180327868852458, 'number': 33} | {'precision': 0.75, 'recall': 0.46153846153846156, 'f1': 0.5714285714285714, 'number': 13} | {'precision': 0.75, 'recall': 0.8461538461538461, 'f1': 0.7951807228915662, 'number': 39} | {'precision': 0.6904761904761905, 'recall': 0.7631578947368421, 'f1': 0.725, 'number': 38} | {'precision': 0.9644128113879004, 'recall': 0.9217687074829932, 'f1': 0.942608695652174, 'number': 294} | {'precision': 0.8084112149532711, 'recall': 0.8522167487684729, 'f1': 0.829736211031175, 'number': 203} | {'precision': 0.4857142857142857, 'recall': 0.5483870967741935, 'f1': 0.5151515151515151, 'number': 31} | {'precision': 0.8555667001003009, 'recall': 0.9715261958997722, 'f1': 0.9098666666666666, 'number': 878} | {'precision': 0.8565217391304348, 'recall': 0.8454935622317596, 'f1': 0.8509719222462203, 'number': 233} | {'precision': 0.7096774193548387, 'recall': 0.7857142857142857, 'f1': 0.7457627118644068, 'number': 28} | {'precision': 0.8043478260869565, 'recall': 0.5826771653543307, 'f1': 0.6757990867579909, 'number': 127} | {'precision': 0.6842105263157895, 'recall': 0.5909090909090909, 'f1': 0.6341463414634148, 'number': 44} | {'precision': 0.41304347826086957, 'recall': 0.4418604651162791, 'f1': 0.4269662921348315, 'number': 43} | {'precision': 0.7313432835820896, 'recall': 0.6447368421052632, 'f1': 0.6853146853146853, 'number': 76} | 0.8489 | 0.8584 | 0.8536 | 0.9374 |
| 0.0129 | 9.0 | 6615 | 0.3658 | {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14} | {'precision': 0.5104166666666666, 'recall': 0.4224137931034483, 'f1': 0.46226415094339623, 'number': 116} | {'precision': 0.9496567505720824, 'recall': 0.962877030162413, 'f1': 0.956221198156682, 'number': 431} | {'precision': 0.7105263157894737, 'recall': 0.7941176470588235, 'f1': 0.7499999999999999, 'number': 34} | {'precision': 0.8950617283950617, 'recall': 0.8913934426229508, 'f1': 0.8932238193018481, 'number': 488} | {'precision': 0.85, 'recall': 0.918918918918919, 'f1': 0.8831168831168831, 'number': 37} | {'precision': 1.0, 'recall': 0.8181818181818182, 'f1': 0.9, 'number': 33} | {'precision': 0.75, 'recall': 0.46153846153846156, 'f1': 0.5714285714285714, 'number': 13} | {'precision': 0.7619047619047619, 'recall': 0.8205128205128205, 'f1': 0.7901234567901233, 'number': 39} | {'precision': 0.8108108108108109, 'recall': 0.7894736842105263, 'f1': 0.8, 'number': 38} | {'precision': 0.9644128113879004, 'recall': 0.9217687074829932, 'f1': 0.942608695652174, 'number': 294} | {'precision': 0.7952380952380952, 'recall': 0.8226600985221675, 'f1': 0.8087167070217917, 'number': 203} | {'precision': 0.6428571428571429, 'recall': 0.5806451612903226, 'f1': 0.6101694915254238, 'number': 31} | {'precision': 0.9078242229367631, 'recall': 0.9646924829157175, 'f1': 0.9353948094975152, 'number': 878} | {'precision': 0.8596491228070176, 'recall': 0.8412017167381974, 'f1': 0.8503253796095444, 'number': 233} | {'precision': 0.72, 'recall': 0.6428571428571429, 'f1': 0.6792452830188679, 'number': 28} | {'precision': 0.7872340425531915, 'recall': 0.5826771653543307, 'f1': 0.669683257918552, 'number': 127} | {'precision': 0.7619047619047619, 'recall': 0.7272727272727273, 'f1': 0.7441860465116279, 'number': 44} | {'precision': 0.4222222222222222, 'recall': 0.4418604651162791, 'f1': 0.4318181818181818, 'number': 43} | {'precision': 0.7903225806451613, 'recall': 0.6447368421052632, 'f1': 0.7101449275362319, 'number': 76} | 0.8686 | 0.8616 | 0.8651 | 0.9373 |
| 0.0095 | 10.0 | 7350 | 0.3818 | {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14} | {'precision': 0.4835164835164835, 'recall': 0.3793103448275862, 'f1': 0.42512077294685996, 'number': 116} | {'precision': 0.9557109557109557, 'recall': 0.951276102088167, 'f1': 0.9534883720930232, 'number': 431} | {'precision': 0.7073170731707317, 'recall': 0.8529411764705882, 'f1': 0.7733333333333334, 'number': 34} | {'precision': 0.8875502008032129, 'recall': 0.9057377049180327, 'f1': 0.896551724137931, 'number': 488} | {'precision': 0.85, 'recall': 0.918918918918919, 'f1': 0.8831168831168831, 'number': 37} | {'precision': 1.0, 'recall': 0.8181818181818182, 'f1': 0.9, 'number': 33} | {'precision': 0.75, 'recall': 0.46153846153846156, 'f1': 0.5714285714285714, 'number': 13} | {'precision': 0.7619047619047619, 'recall': 0.8205128205128205, 'f1': 0.7901234567901233, 'number': 39} | {'precision': 0.7894736842105263, 'recall': 0.7894736842105263, 'f1': 0.7894736842105263, 'number': 38} | {'precision': 0.9575971731448764, 'recall': 0.9217687074829932, 'f1': 0.9393414211438476, 'number': 294} | {'precision': 0.7782805429864253, 'recall': 0.8472906403940886, 'f1': 0.8113207547169811, 'number': 203} | {'precision': 0.6551724137931034, 'recall': 0.6129032258064516, 'f1': 0.6333333333333333, 'number': 31} | {'precision': 0.8796680497925311, 'recall': 0.9658314350797267, 'f1': 0.9207383279044516, 'number': 878} | {'precision': 0.8521739130434782, 'recall': 0.8412017167381974, 'f1': 0.8466522678185744, 'number': 233} | {'precision': 0.6666666666666666, 'recall': 0.7857142857142857, 'f1': 0.721311475409836, 'number': 28} | {'precision': 0.8, 'recall': 0.6299212598425197, 'f1': 0.7048458149779735, 'number': 127} | {'precision': 0.7441860465116279, 'recall': 0.7272727272727273, 'f1': 0.735632183908046, 'number': 44} | {'precision': 0.4318181818181818, 'recall': 0.4418604651162791, 'f1': 0.4367816091954023, 'number': 43} | {'precision': 0.7313432835820896, 'recall': 0.6447368421052632, 'f1': 0.6853146853146853, 'number': 76} | 0.8551 | 0.8666 | 0.8608 | 0.9367 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
second-state/OpenChat-3.5-0106-GGUF
|
second-state
| 2024-08-29T04:51:02Z | 223 | 2 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"openchat",
"base_model:openchat/openchat-3.5-0106",
"base_model:quantized:openchat/openchat-3.5-0106",
"license:apache-2.0",
"autotrain_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-10T08:40:25Z |
---
base_model: openchat/openchat-3.5-0106
inference: false
library_name: transformers
license: apache-2.0
model_creator: OpenChat
model_name: Openchat 3.5 0106
model_type: mistral
pipeline_tag: text-generation
quantized_by: Second State Inc.
tags:
- openchat
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenChat-3.5-0106-GGUF
## Original Model
[openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
- Prompt template
- Prompt type: `openchat`
- Prompt string
```text
GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
```
- Reverse prompt: `<|end_of_turn|>`
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:openchat-3.5-0106-Q5_K_M.gguf \
llama-api-server.wasm \
--model-name openchat \
--prompt-template openchat \
--reverse-prompt '<|end_of_turn|>' \
--ctx-size 4096
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:openchat-3.5-0106-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template openchat \
--reverse-prompt '<|end_of_turn|>' \
--ctx-size 4096
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [openchat-3.5-0106.Q2_K.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q2_K.gguf) | Q2_K | 2 | 3.08 GB| smallest, significant quality loss - not recommended for most purposes |
| [openchat-3.5-0106.Q3_K_L.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| small, substantial quality loss |
| [openchat-3.5-0106.Q3_K_M.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
| [openchat-3.5-0106.Q3_K_S.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| very small, high quality loss |
| [openchat-3.5-0106.Q4_0.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [openchat-3.5-0106.Q4_K_M.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
| [openchat-3.5-0106.Q4_K_S.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| small, greater quality loss |
| [openchat-3.5-0106.Q5_0.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [openchat-3.5-0106.Q5_K_M.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended |
| [openchat-3.5-0106.Q5_K_S.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| large, low quality loss - recommended |
| [openchat-3.5-0106.Q6_K.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q6_K.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss |
| [openchat-3.5-0106.Q8_0.gguf](https://huggingface.co/second-state/OpenChat-3.5-0106-GGUF/blob/main/openchat-3.5-0106-Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| very large, extremely low quality loss - not recommended |
|
Colma/Llama-3.1-8B-bnb-4bit-wenyanwen
|
Colma
| 2024-08-29T04:40:05Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T15:56:46Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ychoikr/test_trainer
|
ychoikr
| 2024-08-29T04:32:35Z | 5 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-08-29T04:31:47Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6643
- Accuracy: 0.635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.7370 | 0.48 |
| No log | 2.0 | 50 | 0.6643 | 0.635 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
PotatoB/Model_Kinship_4-3
|
PotatoB
| 2024-08-29T04:32:17Z | 5 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"PotatoB/evo_exp-point-3-2",
"PotatoB/evo_exp-point-3-4",
"license:apache-2.0",
"region:us"
] | null | 2024-08-29T04:29:08Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- PotatoB/evo_exp-point-3-2
- PotatoB/evo_exp-point-3-4
---
# evo_exp-point-4-7
evo_exp-point-4-7 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [PotatoB/evo_exp-point-3-2](https://huggingface.co/PotatoB/evo_exp-point-3-2)
* [PotatoB/evo_exp-point-3-4](https://huggingface.co/PotatoB/evo_exp-point-3-4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: PotatoB/evo_exp-point-3-2
layer_range: [0, 32]
- model: PotatoB/evo_exp-point-3-4
layer_range: [0, 32]
merge_method: slerp
base_model: PotatoB/evo_exp-point-3-2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Bagus/speecht5_finetuned_commonvoice_id
|
Bagus
| 2024-08-29T04:29:18Z | 91 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"text-to-speech",
"id",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-03-28T09:12:02Z |
---
language:
- id
license: mit
base_model: microsoft/speecht5_tts
tags:
- text-to-speech
datasets:
- mozilla-foundation/common_voice_16_1
model-index:
- name: speecht5_finetuned_commonvoice_id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_commonvoice_id
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the mozilla-foundation/common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4675
## How to use/inference
Follow the example below and adapt with your own need.
```
# ft_t5_id_inference.py
import sounddevice as sd
import torch
import torchaudio
from datasets import Audio, load_dataset
from transformers import (
SpeechT5ForTextToSpeech,
SpeechT5HifiGan,
SpeechT5Processor,
)
from utils import create_speaker_embedding
# load dataset and pre-trained model
dataset = load_dataset(
"mozilla-foundation/common_voice_16_1", "id", split="test")
model = SpeechT5ForTextToSpeech.from_pretrained(
"Bagus/speecht5_finetuned_commonvoice_id")
# process the text using checkpoint
checkpoint = "microsoft/speecht5_tts"
processor = SpeechT5Processor.from_pretrained(checkpoint)
sampling_rate = processor.feature_extractor.sampling_rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
def prepare_dataset(example):
audio = example["audio"]
example = processor(
text=example["sentence"],
audio_target=audio["array"],
sampling_rate=audio["sampling_rate"],
return_attention_mask=False,
)
# strip off the batch dimension
example["labels"] = example["labels"][0]
# use SpeechBrain to obtain x-vector
example["speaker_embeddings"] = create_speaker_embedding(audio["array"])
return example
# prepare the speaker embeddings from the dataset and text
example = prepare_dataset(dataset[30])
speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
# prepare text to be converted to speech
text = "Saya suka baju yang berwarna merah tua."
inputs = processor(text=text, return_tensors="pt")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
speech = model.generate_speech(
inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sampling_rate = 16000
sd.play(speech, samplerate=sampling_rate, blocking=True)
# save the audio, signal needs to be in 2D tensor
torchaudio.save("output_t5_ft_cv16_id.wav", speech.unsqueeze(0), 16000)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5394 | 4.28 | 1000 | 0.4908 |
| 0.5062 | 8.56 | 2000 | 0.4730 |
| 0.5074 | 12.83 | 3000 | 0.4700 |
| 0.5023 | 17.11 | 4000 | 0.4675 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
csikasote/mms-zeroshot-300m-bem
|
csikasote
| 2024-08-29T04:23:00Z | 96 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"BembaSpeech",
"mms",
"generated_from_trainer",
"base_model:mms-meta/mms-zeroshot-300m",
"base_model:finetune:mms-meta/mms-zeroshot-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-25T07:40:13Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: mms-meta/mms-zeroshot-300m
tags:
- automatic-speech-recognition
- BembaSpeech
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-zeroshot-300m-bem
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-zeroshot-300m-bem
This model is a fine-tuned version of [mms-meta/mms-zeroshot-300m](https://huggingface.co/mms-meta/mms-zeroshot-300m) on the BEMBASPEECH - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1787
- Wer: 0.3583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.6629 | 0.1778 | 500 | 0.3540 | 0.5421 |
| 0.6579 | 0.3556 | 1000 | 0.2588 | 0.4883 |
| 0.591 | 0.5334 | 1500 | 0.2552 | 0.4720 |
| 0.5467 | 0.7112 | 2000 | 0.2370 | 0.4542 |
| 0.5405 | 0.8890 | 2500 | 0.2376 | 0.4556 |
| 0.5027 | 1.0669 | 3000 | 0.2234 | 0.4307 |
| 0.5001 | 1.2447 | 3500 | 0.2176 | 0.4213 |
| 0.4962 | 1.4225 | 4000 | 0.2199 | 0.4205 |
| 0.486 | 1.6003 | 4500 | 0.2145 | 0.4167 |
| 0.47 | 1.7781 | 5000 | 0.2159 | 0.4169 |
| 0.4557 | 1.9559 | 5500 | 0.2099 | 0.4135 |
| 0.4514 | 2.1337 | 6000 | 0.2091 | 0.4100 |
| 0.4539 | 2.3115 | 6500 | 0.2038 | 0.4016 |
| 0.439 | 2.4893 | 7000 | 0.2041 | 0.4025 |
| 0.4378 | 2.6671 | 7500 | 0.2002 | 0.3916 |
| 0.4347 | 2.8450 | 8000 | 0.1961 | 0.3911 |
| 0.4278 | 3.0228 | 8500 | 0.1995 | 0.3923 |
| 0.4117 | 3.2006 | 9000 | 0.1959 | 0.3892 |
| 0.4149 | 3.3784 | 9500 | 0.1926 | 0.3859 |
| 0.4148 | 3.5562 | 10000 | 0.1958 | 0.3804 |
| 0.4009 | 3.7340 | 10500 | 0.1930 | 0.3790 |
| 0.4174 | 3.9118 | 11000 | 0.1955 | 0.3823 |
| 0.4012 | 4.0896 | 11500 | 0.1950 | 0.3812 |
| 0.3974 | 4.2674 | 12000 | 0.1934 | 0.3773 |
| 0.3943 | 4.4452 | 12500 | 0.1845 | 0.3720 |
| 0.4071 | 4.6230 | 13000 | 0.1920 | 0.3839 |
| 0.3968 | 4.8009 | 13500 | 0.1867 | 0.3743 |
| 0.3795 | 4.9787 | 14000 | 0.1872 | 0.3713 |
| 0.3856 | 5.1565 | 14500 | 0.1869 | 0.3737 |
| 0.3706 | 5.3343 | 15000 | 0.1903 | 0.3766 |
| 0.3784 | 5.5121 | 15500 | 0.1861 | 0.3683 |
| 0.3777 | 5.6899 | 16000 | 0.1866 | 0.3713 |
| 0.3861 | 5.8677 | 16500 | 0.1812 | 0.3637 |
| 0.3711 | 6.0455 | 17000 | 0.1842 | 0.3667 |
| 0.374 | 6.2233 | 17500 | 0.1815 | 0.3618 |
| 0.3539 | 6.4011 | 18000 | 0.1815 | 0.3647 |
| 0.3625 | 6.5789 | 18500 | 0.1785 | 0.3589 |
| 0.3599 | 6.7568 | 19000 | 0.1795 | 0.3621 |
| 0.3654 | 6.9346 | 19500 | 0.1822 | 0.3624 |
| 0.3693 | 7.1124 | 20000 | 0.1792 | 0.3612 |
| 0.3519 | 7.2902 | 20500 | 0.1800 | 0.3675 |
| 0.3553 | 7.4680 | 21000 | 0.1808 | 0.3640 |
| 0.3451 | 7.6458 | 21500 | 0.1808 | 0.3620 |
| 0.3558 | 7.8236 | 22000 | 0.1794 | 0.3610 |
| 0.3595 | 8.0014 | 22500 | 0.1772 | 0.3576 |
| 0.3404 | 8.1792 | 23000 | 0.1788 | 0.3581 |
| 0.3593 | 8.3570 | 23500 | 0.1782 | 0.3580 |
| 0.3471 | 8.5349 | 24000 | 0.1797 | 0.3606 |
| 0.3497 | 8.7127 | 24500 | 0.1778 | 0.3588 |
| 0.3398 | 8.8905 | 25000 | 0.1775 | 0.3583 |
| 0.3444 | 9.0683 | 25500 | 0.1796 | 0.3586 |
| 0.3366 | 9.2461 | 26000 | 0.1785 | 0.3574 |
| 0.3434 | 9.4239 | 26500 | 0.1781 | 0.3592 |
| 0.3426 | 9.6017 | 27000 | 0.1786 | 0.3593 |
| 0.3496 | 9.7795 | 27500 | 0.1787 | 0.3590 |
| 0.334 | 9.9573 | 28000 | 0.1788 | 0.3588 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
luaqi/sn29_merged_v8
|
luaqi
| 2024-08-29T04:11:36Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T04:08:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PotatoB/Model_Kinship_4-2
|
PotatoB
| 2024-08-29T04:08:54Z | 6 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"PotatoB/evo_exp-point-2-1",
"PotatoB/evo_exp-point-3-4",
"license:apache-2.0",
"region:us"
] | null | 2024-08-29T04:06:06Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- PotatoB/evo_exp-point-2-1
- PotatoB/evo_exp-point-3-4
---
# evo_exp-point-4-6
evo_exp-point-4-6 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [PotatoB/evo_exp-point-2-1](https://huggingface.co/PotatoB/evo_exp-point-2-1)
* [PotatoB/evo_exp-point-3-4](https://huggingface.co/PotatoB/evo_exp-point-3-4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: PotatoB/evo_exp-point-2-1
layer_range: [0, 32]
- model: PotatoB/evo_exp-point-3-4
layer_range: [0, 32]
merge_method: slerp
base_model: PotatoB/evo_exp-point-2-1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF
|
yixuan-chia
| 2024-08-29T03:57:29Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"base_model:Snowflake/snowflake-arctic-embed-m-long",
"base_model:quantized:Snowflake/snowflake-arctic-embed-m-long",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-08-29T03:57:26Z |
---
base_model: Snowflake/snowflake-arctic-embed-m-long
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: snowflake-arctic-m-long
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.4776119402985
- type: ap
value: 42.34374238166049
- type: f1
value: 72.51164234732224
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 78.7416
- type: ap
value: 73.12074819362377
- type: f1
value: 78.64057339708795
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.926
- type: f1
value: 39.35531993117573
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 34.851
- type: map_at_10
value: 51.473
- type: map_at_100
value: 52.103
- type: map_at_1000
value: 52.105000000000004
- type: map_at_3
value: 46.776
- type: map_at_5
value: 49.617
- type: mrr_at_1
value: 35.491
- type: mrr_at_10
value: 51.73799999999999
- type: mrr_at_100
value: 52.37500000000001
- type: mrr_at_1000
value: 52.378
- type: mrr_at_3
value: 46.965
- type: mrr_at_5
value: 49.878
- type: ndcg_at_1
value: 34.851
- type: ndcg_at_10
value: 60.364
- type: ndcg_at_100
value: 62.888999999999996
- type: ndcg_at_1000
value: 62.946000000000005
- type: ndcg_at_3
value: 50.807
- type: ndcg_at_5
value: 55.901
- type: precision_at_1
value: 34.851
- type: precision_at_10
value: 8.855
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.963999999999999
- type: recall_at_1
value: 34.851
- type: recall_at_10
value: 88.549
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 62.517999999999994
- type: recall_at_5
value: 74.822
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.5554998405317
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.614248811397005
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.355489424753884
- type: mrr
value: 75.49443784900849
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.17311056578292
- type: cos_sim_spearman
value: 88.24237210809322
- type: euclidean_pearson
value: 87.3188065853646
- type: euclidean_spearman
value: 88.24237210809322
- type: manhattan_pearson
value: 86.89499710049658
- type: manhattan_spearman
value: 87.85441146091777
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.26298701298703
- type: f1
value: 79.68356764080303
- task:
type: Clustering
dataset:
name: MTEB BigPatentClustering
type: jinaai/big-patent-clustering
config: default
split: test
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
metrics:
- type: v_measure
value: 20.923883720813706
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.16058801465044
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.1402356118627
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.612
- type: map_at_10
value: 47.117
- type: map_at_100
value: 48.711
- type: map_at_1000
value: 48.826
- type: map_at_3
value: 43.858999999999995
- type: map_at_5
value: 45.612
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 52.806
- type: mrr_at_100
value: 53.564
- type: mrr_at_1000
value: 53.596999999999994
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 51.841
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.291999999999994
- type: ndcg_at_100
value: 58.711999999999996
- type: ndcg_at_1000
value: 60.317
- type: ndcg_at_3
value: 48.855
- type: ndcg_at_5
value: 50.778
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 9.927999999999999
- type: precision_at_100
value: 1.592
- type: precision_at_1000
value: 0.201
- type: precision_at_3
value: 23.366999999999997
- type: precision_at_5
value: 16.366
- type: recall_at_1
value: 35.612
- type: recall_at_10
value: 64.671
- type: recall_at_100
value: 86.97
- type: recall_at_1000
value: 96.99600000000001
- type: recall_at_3
value: 51.37199999999999
- type: recall_at_5
value: 57.094
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.742
- type: map_at_10
value: 44.49
- type: map_at_100
value: 45.781
- type: map_at_1000
value: 45.902
- type: map_at_3
value: 41.453
- type: map_at_5
value: 43.251
- type: mrr_at_1
value: 42.357
- type: mrr_at_10
value: 50.463
- type: mrr_at_100
value: 51.17
- type: mrr_at_1000
value: 51.205999999999996
- type: mrr_at_3
value: 48.397
- type: mrr_at_5
value: 49.649
- type: ndcg_at_1
value: 42.357
- type: ndcg_at_10
value: 50.175000000000004
- type: ndcg_at_100
value: 54.491
- type: ndcg_at_1000
value: 56.282
- type: ndcg_at_3
value: 46.159
- type: ndcg_at_5
value: 48.226
- type: precision_at_1
value: 42.357
- type: precision_at_10
value: 9.382
- type: precision_at_100
value: 1.473
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 22.187
- type: precision_at_5
value: 15.758
- type: recall_at_1
value: 33.742
- type: recall_at_10
value: 59.760999999999996
- type: recall_at_100
value: 77.89500000000001
- type: recall_at_1000
value: 89.005
- type: recall_at_3
value: 47.872
- type: recall_at_5
value: 53.559
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 43.883
- type: map_at_10
value: 56.464999999999996
- type: map_at_100
value: 57.394
- type: map_at_1000
value: 57.443999999999996
- type: map_at_3
value: 53.169
- type: map_at_5
value: 54.984
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 59.997
- type: mrr_at_100
value: 60.586
- type: mrr_at_1000
value: 60.61
- type: mrr_at_3
value: 57.837
- type: mrr_at_5
value: 59.019
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 62.134
- type: ndcg_at_100
value: 65.69500000000001
- type: ndcg_at_1000
value: 66.674
- type: ndcg_at_3
value: 56.916999999999994
- type: ndcg_at_5
value: 59.312
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 9.812
- type: precision_at_100
value: 1.25
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 25.119999999999997
- type: precision_at_5
value: 17.016000000000002
- type: recall_at_1
value: 43.883
- type: recall_at_10
value: 75.417
- type: recall_at_100
value: 90.545
- type: recall_at_1000
value: 97.44500000000001
- type: recall_at_3
value: 61.306000000000004
- type: recall_at_5
value: 67.244
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 29.813000000000002
- type: map_at_10
value: 38.627
- type: map_at_100
value: 39.735
- type: map_at_1000
value: 39.806000000000004
- type: map_at_3
value: 36.283
- type: map_at_5
value: 37.491
- type: mrr_at_1
value: 32.316
- type: mrr_at_10
value: 40.752
- type: mrr_at_100
value: 41.699000000000005
- type: mrr_at_1000
value: 41.749
- type: mrr_at_3
value: 38.531
- type: mrr_at_5
value: 39.706
- type: ndcg_at_1
value: 32.316
- type: ndcg_at_10
value: 43.524
- type: ndcg_at_100
value: 48.648
- type: ndcg_at_1000
value: 50.405
- type: ndcg_at_3
value: 38.928000000000004
- type: ndcg_at_5
value: 40.967
- type: precision_at_1
value: 32.316
- type: precision_at_10
value: 6.451999999999999
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 16.384
- type: precision_at_5
value: 11.006
- type: recall_at_1
value: 29.813000000000002
- type: recall_at_10
value: 56.562999999999995
- type: recall_at_100
value: 79.452
- type: recall_at_1000
value: 92.715
- type: recall_at_3
value: 43.985
- type: recall_at_5
value: 49.001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.961000000000002
- type: map_at_10
value: 28.026
- type: map_at_100
value: 29.212
- type: map_at_1000
value: 29.332
- type: map_at_3
value: 25.296999999999997
- type: map_at_5
value: 26.832
- type: mrr_at_1
value: 24.627
- type: mrr_at_10
value: 33.045
- type: mrr_at_100
value: 33.944
- type: mrr_at_1000
value: 34.013
- type: mrr_at_3
value: 30.307000000000002
- type: mrr_at_5
value: 31.874000000000002
- type: ndcg_at_1
value: 24.627
- type: ndcg_at_10
value: 33.414
- type: ndcg_at_100
value: 39.061
- type: ndcg_at_1000
value: 41.795
- type: ndcg_at_3
value: 28.377000000000002
- type: ndcg_at_5
value: 30.781999999999996
- type: precision_at_1
value: 24.627
- type: precision_at_10
value: 6.02
- type: precision_at_100
value: 1.035
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.516
- type: precision_at_5
value: 9.851
- type: recall_at_1
value: 19.961000000000002
- type: recall_at_10
value: 45.174
- type: recall_at_100
value: 69.69
- type: recall_at_1000
value: 89.24600000000001
- type: recall_at_3
value: 31.062
- type: recall_at_5
value: 37.193
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.080999999999996
- type: map_at_10
value: 42.177
- type: map_at_100
value: 43.431999999999995
- type: map_at_1000
value: 43.533
- type: map_at_3
value: 38.721
- type: map_at_5
value: 40.669
- type: mrr_at_1
value: 38.787
- type: mrr_at_10
value: 47.762
- type: mrr_at_100
value: 48.541000000000004
- type: mrr_at_1000
value: 48.581
- type: mrr_at_3
value: 45.123999999999995
- type: mrr_at_5
value: 46.639
- type: ndcg_at_1
value: 38.787
- type: ndcg_at_10
value: 48.094
- type: ndcg_at_100
value: 53.291
- type: ndcg_at_1000
value: 55.21
- type: ndcg_at_3
value: 42.721
- type: ndcg_at_5
value: 45.301
- type: precision_at_1
value: 38.787
- type: precision_at_10
value: 8.576
- type: precision_at_100
value: 1.306
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 32.080999999999996
- type: recall_at_10
value: 59.948
- type: recall_at_100
value: 81.811
- type: recall_at_1000
value: 94.544
- type: recall_at_3
value: 44.903999999999996
- type: recall_at_5
value: 51.763999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.869
- type: map_at_10
value: 38.954
- type: map_at_100
value: 40.233000000000004
- type: map_at_1000
value: 40.332
- type: map_at_3
value: 35.585
- type: map_at_5
value: 37.476
- type: mrr_at_1
value: 35.959
- type: mrr_at_10
value: 44.800000000000004
- type: mrr_at_100
value: 45.609
- type: mrr_at_1000
value: 45.655
- type: mrr_at_3
value: 42.333
- type: mrr_at_5
value: 43.68
- type: ndcg_at_1
value: 35.959
- type: ndcg_at_10
value: 44.957
- type: ndcg_at_100
value: 50.275000000000006
- type: ndcg_at_1000
value: 52.29899999999999
- type: ndcg_at_3
value: 39.797
- type: ndcg_at_5
value: 42.128
- type: precision_at_1
value: 35.959
- type: precision_at_10
value: 8.185
- type: precision_at_100
value: 1.261
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 18.988
- type: precision_at_5
value: 13.516
- type: recall_at_1
value: 28.869
- type: recall_at_10
value: 57.154
- type: recall_at_100
value: 79.764
- type: recall_at_1000
value: 93.515
- type: recall_at_3
value: 42.364000000000004
- type: recall_at_5
value: 48.756
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 29.31008333333333
- type: map_at_10
value: 38.81849999999999
- type: map_at_100
value: 40.05058333333334
- type: map_at_1000
value: 40.16116666666667
- type: map_at_3
value: 35.91441666666667
- type: map_at_5
value: 37.526583333333335
- type: mrr_at_1
value: 34.60066666666667
- type: mrr_at_10
value: 43.08858333333333
- type: mrr_at_100
value: 43.927749999999996
- type: mrr_at_1000
value: 43.97866666666667
- type: mrr_at_3
value: 40.72775
- type: mrr_at_5
value: 42.067249999999994
- type: ndcg_at_1
value: 34.60066666666667
- type: ndcg_at_10
value: 44.20841666666667
- type: ndcg_at_100
value: 49.32866666666667
- type: ndcg_at_1000
value: 51.373999999999995
- type: ndcg_at_3
value: 39.452083333333334
- type: ndcg_at_5
value: 41.67
- type: precision_at_1
value: 34.60066666666667
- type: precision_at_10
value: 7.616583333333334
- type: precision_at_100
value: 1.20175
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 17.992
- type: precision_at_5
value: 12.658416666666666
- type: recall_at_1
value: 29.31008333333333
- type: recall_at_10
value: 55.81900000000001
- type: recall_at_100
value: 78.06308333333334
- type: recall_at_1000
value: 92.10641666666668
- type: recall_at_3
value: 42.50166666666667
- type: recall_at_5
value: 48.26108333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 26.773000000000003
- type: map_at_10
value: 34.13
- type: map_at_100
value: 35.113
- type: map_at_1000
value: 35.211
- type: map_at_3
value: 31.958
- type: map_at_5
value: 33.080999999999996
- type: mrr_at_1
value: 30.061
- type: mrr_at_10
value: 37.061
- type: mrr_at_100
value: 37.865
- type: mrr_at_1000
value: 37.939
- type: mrr_at_3
value: 34.995
- type: mrr_at_5
value: 36.092
- type: ndcg_at_1
value: 30.061
- type: ndcg_at_10
value: 38.391999999999996
- type: ndcg_at_100
value: 43.13
- type: ndcg_at_1000
value: 45.449
- type: ndcg_at_3
value: 34.411
- type: ndcg_at_5
value: 36.163000000000004
- type: precision_at_1
value: 30.061
- type: precision_at_10
value: 5.982
- type: precision_at_100
value: 0.911
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 14.673
- type: precision_at_5
value: 10.030999999999999
- type: recall_at_1
value: 26.773000000000003
- type: recall_at_10
value: 48.445
- type: recall_at_100
value: 69.741
- type: recall_at_1000
value: 86.59
- type: recall_at_3
value: 37.576
- type: recall_at_5
value: 41.948
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.556
- type: map_at_10
value: 26.340999999999998
- type: map_at_100
value: 27.560000000000002
- type: map_at_1000
value: 27.685
- type: map_at_3
value: 24.136
- type: map_at_5
value: 25.34
- type: mrr_at_1
value: 22.368
- type: mrr_at_10
value: 30.192999999999998
- type: mrr_at_100
value: 31.183
- type: mrr_at_1000
value: 31.258000000000003
- type: mrr_at_3
value: 28.223
- type: mrr_at_5
value: 29.294999999999998
- type: ndcg_at_1
value: 22.368
- type: ndcg_at_10
value: 31.029
- type: ndcg_at_100
value: 36.768
- type: ndcg_at_1000
value: 39.572
- type: ndcg_at_3
value: 27.197
- type: ndcg_at_5
value: 28.912
- type: precision_at_1
value: 22.368
- type: precision_at_10
value: 5.606
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 12.892999999999999
- type: precision_at_5
value: 9.16
- type: recall_at_1
value: 18.556
- type: recall_at_10
value: 41.087
- type: recall_at_100
value: 66.92
- type: recall_at_1000
value: 86.691
- type: recall_at_3
value: 30.415
- type: recall_at_5
value: 34.813
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 29.953999999999997
- type: map_at_10
value: 39.633
- type: map_at_100
value: 40.923
- type: map_at_1000
value: 41.016000000000005
- type: map_at_3
value: 36.609
- type: map_at_5
value: 38.443
- type: mrr_at_1
value: 35.354
- type: mrr_at_10
value: 43.718
- type: mrr_at_100
value: 44.651999999999994
- type: mrr_at_1000
value: 44.696000000000005
- type: mrr_at_3
value: 41.154
- type: mrr_at_5
value: 42.730000000000004
- type: ndcg_at_1
value: 35.354
- type: ndcg_at_10
value: 44.933
- type: ndcg_at_100
value: 50.577000000000005
- type: ndcg_at_1000
value: 52.428
- type: ndcg_at_3
value: 39.833
- type: ndcg_at_5
value: 42.465
- type: precision_at_1
value: 35.354
- type: precision_at_10
value: 7.416
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 17.817
- type: precision_at_5
value: 12.687000000000001
- type: recall_at_1
value: 29.953999999999997
- type: recall_at_10
value: 56.932
- type: recall_at_100
value: 80.93900000000001
- type: recall_at_1000
value: 93.582
- type: recall_at_3
value: 43.192
- type: recall_at_5
value: 49.757
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.85
- type: map_at_10
value: 37.68
- type: map_at_100
value: 39.295
- type: map_at_1000
value: 39.527
- type: map_at_3
value: 35.036
- type: map_at_5
value: 36.269
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 42.096000000000004
- type: mrr_at_100
value: 43.019
- type: mrr_at_1000
value: 43.071
- type: mrr_at_3
value: 39.987
- type: mrr_at_5
value: 40.995
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 43.461
- type: ndcg_at_100
value: 49.138
- type: ndcg_at_1000
value: 51.50900000000001
- type: ndcg_at_3
value: 39.317
- type: ndcg_at_5
value: 40.760999999999996
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 8.161999999999999
- type: precision_at_100
value: 1.583
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 18.445
- type: precision_at_5
value: 12.885
- type: recall_at_1
value: 27.85
- type: recall_at_10
value: 54.419
- type: recall_at_100
value: 79.742
- type: recall_at_1000
value: 93.97
- type: recall_at_3
value: 42.149
- type: recall_at_5
value: 46.165
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 24.627
- type: map_at_10
value: 32.182
- type: map_at_100
value: 33.217999999999996
- type: map_at_1000
value: 33.32
- type: map_at_3
value: 28.866999999999997
- type: map_at_5
value: 30.871
- type: mrr_at_1
value: 26.987
- type: mrr_at_10
value: 34.37
- type: mrr_at_100
value: 35.301
- type: mrr_at_1000
value: 35.369
- type: mrr_at_3
value: 31.391999999999996
- type: mrr_at_5
value: 33.287
- type: ndcg_at_1
value: 26.987
- type: ndcg_at_10
value: 37.096000000000004
- type: ndcg_at_100
value: 42.158
- type: ndcg_at_1000
value: 44.548
- type: ndcg_at_3
value: 30.913
- type: ndcg_at_5
value: 34.245
- type: precision_at_1
value: 26.987
- type: precision_at_10
value: 5.878
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 12.815999999999999
- type: precision_at_5
value: 9.612
- type: recall_at_1
value: 24.627
- type: recall_at_10
value: 50.257
- type: recall_at_100
value: 73.288
- type: recall_at_1000
value: 90.97800000000001
- type: recall_at_3
value: 33.823
- type: recall_at_5
value: 41.839
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 17.343
- type: map_at_10
value: 28.59
- type: map_at_100
value: 30.591
- type: map_at_1000
value: 30.759999999999998
- type: map_at_3
value: 24.197
- type: map_at_5
value: 26.433
- type: mrr_at_1
value: 39.609
- type: mrr_at_10
value: 51.107
- type: mrr_at_100
value: 51.87199999999999
- type: mrr_at_1000
value: 51.894
- type: mrr_at_3
value: 48.154
- type: mrr_at_5
value: 49.939
- type: ndcg_at_1
value: 39.609
- type: ndcg_at_10
value: 38.329
- type: ndcg_at_100
value: 45.573
- type: ndcg_at_1000
value: 48.405
- type: ndcg_at_3
value: 32.506
- type: ndcg_at_5
value: 34.331
- type: precision_at_1
value: 39.609
- type: precision_at_10
value: 11.668000000000001
- type: precision_at_100
value: 1.9539999999999997
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 23.952
- type: precision_at_5
value: 17.902
- type: recall_at_1
value: 17.343
- type: recall_at_10
value: 43.704
- type: recall_at_100
value: 68.363
- type: recall_at_1000
value: 84.04599999999999
- type: recall_at_3
value: 29.028
- type: recall_at_5
value: 35.022
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.934999999999999
- type: map_at_10
value: 22.081
- type: map_at_100
value: 32.036
- type: map_at_1000
value: 33.803
- type: map_at_3
value: 15.687999999999999
- type: map_at_5
value: 18.357
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.506
- type: mrr_at_100
value: 78.874
- type: mrr_at_1000
value: 78.88300000000001
- type: mrr_at_3
value: 77.667
- type: mrr_at_5
value: 78.342
- type: ndcg_at_1
value: 57.25
- type: ndcg_at_10
value: 45.286
- type: ndcg_at_100
value: 50.791
- type: ndcg_at_1000
value: 58.021
- type: ndcg_at_3
value: 49.504
- type: ndcg_at_5
value: 47.03
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 36.425000000000004
- type: precision_at_100
value: 11.953
- type: precision_at_1000
value: 2.248
- type: precision_at_3
value: 53.25
- type: precision_at_5
value: 46.150000000000006
- type: recall_at_1
value: 9.934999999999999
- type: recall_at_10
value: 27.592
- type: recall_at_100
value: 58.089
- type: recall_at_1000
value: 81.025
- type: recall_at_3
value: 17.048
- type: recall_at_5
value: 20.834
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.25999999999999
- type: f1
value: 43.83371155132253
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 73.68900000000001
- type: map_at_10
value: 82.878
- type: map_at_100
value: 83.084
- type: map_at_1000
value: 83.097
- type: map_at_3
value: 81.528
- type: map_at_5
value: 82.432
- type: mrr_at_1
value: 79.49300000000001
- type: mrr_at_10
value: 87.24300000000001
- type: mrr_at_100
value: 87.3
- type: mrr_at_1000
value: 87.301
- type: mrr_at_3
value: 86.359
- type: mrr_at_5
value: 87.01
- type: ndcg_at_1
value: 79.49300000000001
- type: ndcg_at_10
value: 86.894
- type: ndcg_at_100
value: 87.6
- type: ndcg_at_1000
value: 87.79299999999999
- type: ndcg_at_3
value: 84.777
- type: ndcg_at_5
value: 86.08
- type: precision_at_1
value: 79.49300000000001
- type: precision_at_10
value: 10.578
- type: precision_at_100
value: 1.117
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.592999999999996
- type: precision_at_5
value: 20.423
- type: recall_at_1
value: 73.68900000000001
- type: recall_at_10
value: 94.833
- type: recall_at_100
value: 97.554
- type: recall_at_1000
value: 98.672
- type: recall_at_3
value: 89.236
- type: recall_at_5
value: 92.461
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 20.59
- type: map_at_10
value: 34.089000000000006
- type: map_at_100
value: 35.796
- type: map_at_1000
value: 35.988
- type: map_at_3
value: 29.877
- type: map_at_5
value: 32.202999999999996
- type: mrr_at_1
value: 41.049
- type: mrr_at_10
value: 50.370000000000005
- type: mrr_at_100
value: 51.209
- type: mrr_at_1000
value: 51.247
- type: mrr_at_3
value: 48.122
- type: mrr_at_5
value: 49.326
- type: ndcg_at_1
value: 41.049
- type: ndcg_at_10
value: 42.163000000000004
- type: ndcg_at_100
value: 48.638999999999996
- type: ndcg_at_1000
value: 51.775000000000006
- type: ndcg_at_3
value: 38.435
- type: ndcg_at_5
value: 39.561
- type: precision_at_1
value: 41.049
- type: precision_at_10
value: 11.481
- type: precision_at_100
value: 1.8239999999999998
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 25.257
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 20.59
- type: recall_at_10
value: 49.547999999999995
- type: recall_at_100
value: 73.676
- type: recall_at_1000
value: 92.269
- type: recall_at_3
value: 35.656
- type: recall_at_5
value: 41.455
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 39.932
- type: map_at_10
value: 64.184
- type: map_at_100
value: 65.06
- type: map_at_1000
value: 65.109
- type: map_at_3
value: 60.27
- type: map_at_5
value: 62.732
- type: mrr_at_1
value: 79.865
- type: mrr_at_10
value: 85.99799999999999
- type: mrr_at_100
value: 86.13
- type: mrr_at_1000
value: 86.13300000000001
- type: mrr_at_3
value: 85.136
- type: mrr_at_5
value: 85.69200000000001
- type: ndcg_at_1
value: 79.865
- type: ndcg_at_10
value: 72.756
- type: ndcg_at_100
value: 75.638
- type: ndcg_at_1000
value: 76.589
- type: ndcg_at_3
value: 67.38199999999999
- type: ndcg_at_5
value: 70.402
- type: precision_at_1
value: 79.865
- type: precision_at_10
value: 15.387999999999998
- type: precision_at_100
value: 1.7610000000000001
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.394
- type: precision_at_5
value: 28.424
- type: recall_at_1
value: 39.932
- type: recall_at_10
value: 76.941
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 94.396
- type: recall_at_3
value: 65.091
- type: recall_at_5
value: 71.06
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 71.7904
- type: ap
value: 65.82899456730257
- type: f1
value: 71.56611877410202
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.931
- type: map_at_10
value: 34.849999999999994
- type: map_at_100
value: 36.033
- type: map_at_1000
value: 36.08
- type: map_at_3
value: 30.842000000000002
- type: map_at_5
value: 33.229
- type: mrr_at_1
value: 22.55
- type: mrr_at_10
value: 35.436
- type: mrr_at_100
value: 36.563
- type: mrr_at_1000
value: 36.604
- type: mrr_at_3
value: 31.507
- type: mrr_at_5
value: 33.851
- type: ndcg_at_1
value: 22.55
- type: ndcg_at_10
value: 41.969
- type: ndcg_at_100
value: 47.576
- type: ndcg_at_1000
value: 48.731
- type: ndcg_at_3
value: 33.894000000000005
- type: ndcg_at_5
value: 38.133
- type: precision_at_1
value: 22.55
- type: precision_at_10
value: 6.660000000000001
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.532
- type: precision_at_5
value: 10.865
- type: recall_at_1
value: 21.931
- type: recall_at_10
value: 63.841
- type: recall_at_100
value: 89.47699999999999
- type: recall_at_1000
value: 98.259
- type: recall_at_3
value: 42.063
- type: recall_at_5
value: 52.21
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.03921568627452
- type: f1
value: 92.56400672314416
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.515731874145
- type: f1
value: 44.922310875523216
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 77.57383966244727
- type: f1
value: 76.55222378218293
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 62.74836240280833
- type: v_measure
value: 24.414348715238184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.54673839946201
- type: f1
value: 64.61004101532164
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 72.01684013680978
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.123671999617297
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 26.72684341430875
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.910228061734816
- type: mrr
value: 30.835255982532477
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 5.6770000000000005
- type: map_at_10
value: 13.15
- type: map_at_100
value: 16.205
- type: map_at_1000
value: 17.580000000000002
- type: map_at_3
value: 9.651
- type: map_at_5
value: 11.142000000000001
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 56.257000000000005
- type: mrr_at_100
value: 56.708000000000006
- type: mrr_at_1000
value: 56.751
- type: mrr_at_3
value: 54.128
- type: mrr_at_5
value: 55.181000000000004
- type: ndcg_at_1
value: 45.511
- type: ndcg_at_10
value: 35.867
- type: ndcg_at_100
value: 31.566
- type: ndcg_at_1000
value: 40.077
- type: ndcg_at_3
value: 41.9
- type: ndcg_at_5
value: 39.367999999999995
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 7.991
- type: precision_at_1000
value: 2.0469999999999997
- type: precision_at_3
value: 39.938
- type: precision_at_5
value: 34.613
- type: recall_at_1
value: 5.6770000000000005
- type: recall_at_10
value: 17.119999999999997
- type: recall_at_100
value: 30.828
- type: recall_at_1000
value: 62.082
- type: recall_at_3
value: 10.456
- type: recall_at_5
value: 12.903999999999998
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.021
- type: map_at_10
value: 54.976
- type: map_at_100
value: 55.793000000000006
- type: map_at_1000
value: 55.811
- type: map_at_3
value: 50.759
- type: map_at_5
value: 53.429
- type: mrr_at_1
value: 43.308
- type: mrr_at_10
value: 57.118
- type: mrr_at_100
value: 57.69499999999999
- type: mrr_at_1000
value: 57.704
- type: mrr_at_3
value: 53.848
- type: mrr_at_5
value: 55.915000000000006
- type: ndcg_at_1
value: 43.308
- type: ndcg_at_10
value: 62.33800000000001
- type: ndcg_at_100
value: 65.61099999999999
- type: ndcg_at_1000
value: 65.995
- type: ndcg_at_3
value: 54.723
- type: ndcg_at_5
value: 59.026
- type: precision_at_1
value: 43.308
- type: precision_at_10
value: 9.803
- type: precision_at_100
value: 1.167
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.334
- type: precision_at_5
value: 17.144000000000002
- type: recall_at_1
value: 39.021
- type: recall_at_10
value: 82.37299999999999
- type: recall_at_100
value: 96.21499999999999
- type: recall_at_1000
value: 99.02499999999999
- type: recall_at_3
value: 63.031000000000006
- type: recall_at_5
value: 72.856
- task:
type: Classification
dataset:
name: MTEB NewsClassification
type: ag_news
config: default
split: test
revision: eb185aade064a813bc0b7f42de02595523103ca4
metrics:
- type: accuracy
value: 78.03289473684211
- type: f1
value: 77.89323745730803
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (en)
type: GEM/opusparcus
config: en
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.89816700610999
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.9490575649516
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.89816700610999
- type: dot_accuracy
value: 99.89816700610999
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.9490575649516
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.89816700610999
- type: euclidean_accuracy
value: 99.89816700610999
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.9490575649516
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.89816700610999
- type: manhattan_accuracy
value: 99.89816700610999
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.9490575649516
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.89816700610999
- type: max_accuracy
value: 99.89816700610999
- type: max_ap
value: 100.0
- type: max_f1
value: 99.9490575649516
- task:
type: PairClassification
dataset:
name: MTEB PawsX (en)
type: paws-x
config: en
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 61.75000000000001
- type: cos_sim_ap
value: 59.578879568280385
- type: cos_sim_f1
value: 62.50861474844934
- type: cos_sim_precision
value: 45.46365914786967
- type: cos_sim_recall
value: 100.0
- type: dot_accuracy
value: 61.75000000000001
- type: dot_ap
value: 59.57893088951573
- type: dot_f1
value: 62.50861474844934
- type: dot_precision
value: 45.46365914786967
- type: dot_recall
value: 100.0
- type: euclidean_accuracy
value: 61.75000000000001
- type: euclidean_ap
value: 59.578755624671686
- type: euclidean_f1
value: 62.50861474844934
- type: euclidean_precision
value: 45.46365914786967
- type: euclidean_recall
value: 100.0
- type: manhattan_accuracy
value: 61.75000000000001
- type: manhattan_ap
value: 59.58504334461159
- type: manhattan_f1
value: 62.50861474844934
- type: manhattan_precision
value: 45.46365914786967
- type: manhattan_recall
value: 100.0
- type: max_accuracy
value: 61.75000000000001
- type: max_ap
value: 59.58504334461159
- type: max_f1
value: 62.50861474844934
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 70.186
- type: map_at_10
value: 83.875
- type: map_at_100
value: 84.514
- type: map_at_1000
value: 84.53500000000001
- type: map_at_3
value: 80.926
- type: map_at_5
value: 82.797
- type: mrr_at_1
value: 80.82000000000001
- type: mrr_at_10
value: 87.068
- type: mrr_at_100
value: 87.178
- type: mrr_at_1000
value: 87.18
- type: mrr_at_3
value: 86.055
- type: mrr_at_5
value: 86.763
- type: ndcg_at_1
value: 80.84
- type: ndcg_at_10
value: 87.723
- type: ndcg_at_100
value: 88.98700000000001
- type: ndcg_at_1000
value: 89.13499999999999
- type: ndcg_at_3
value: 84.821
- type: ndcg_at_5
value: 86.441
- type: precision_at_1
value: 80.84
- type: precision_at_10
value: 13.270000000000001
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.013
- type: precision_at_5
value: 24.37
- type: recall_at_1
value: 70.186
- type: recall_at_10
value: 94.948
- type: recall_at_100
value: 99.223
- type: recall_at_1000
value: 99.932
- type: recall_at_3
value: 86.57000000000001
- type: recall_at_5
value: 91.157
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.24198927949519
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 61.452073078765544
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 4.972
- type: map_at_10
value: 12.314
- type: map_at_100
value: 14.333000000000002
- type: map_at_1000
value: 14.628
- type: map_at_3
value: 8.972
- type: map_at_5
value: 10.724
- type: mrr_at_1
value: 24.4
- type: mrr_at_10
value: 35.257
- type: mrr_at_100
value: 36.297000000000004
- type: mrr_at_1000
value: 36.363
- type: mrr_at_3
value: 32.267
- type: mrr_at_5
value: 33.942
- type: ndcg_at_1
value: 24.4
- type: ndcg_at_10
value: 20.47
- type: ndcg_at_100
value: 28.111000000000004
- type: ndcg_at_1000
value: 33.499
- type: ndcg_at_3
value: 19.975
- type: ndcg_at_5
value: 17.293
- type: precision_at_1
value: 24.4
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.136
- type: precision_at_1000
value: 0.34299999999999997
- type: precision_at_3
value: 18.733
- type: precision_at_5
value: 15.120000000000001
- type: recall_at_1
value: 4.972
- type: recall_at_10
value: 21.157
- type: recall_at_100
value: 43.335
- type: recall_at_1000
value: 69.652
- type: recall_at_3
value: 11.417
- type: recall_at_5
value: 15.317
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 76.70295978506286
- type: cos_sim_spearman
value: 70.91162732446628
- type: euclidean_pearson
value: 73.25693688746031
- type: euclidean_spearman
value: 70.91162556180127
- type: manhattan_pearson
value: 73.27735004735767
- type: manhattan_spearman
value: 70.8856787022704
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 67.55878682646774
- type: cos_sim_spearman
value: 66.10824660353681
- type: euclidean_pearson
value: 64.93937270068541
- type: euclidean_spearman
value: 66.10824660353681
- type: manhattan_pearson
value: 64.96325555978984
- type: manhattan_spearman
value: 66.12052481638577
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.79979774019496
- type: cos_sim_spearman
value: 79.82293444619499
- type: euclidean_pearson
value: 79.4830436509311
- type: euclidean_spearman
value: 79.82293444619499
- type: manhattan_pearson
value: 79.49785594799296
- type: manhattan_spearman
value: 79.8280390479434
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 76.36839628231121
- type: cos_sim_spearman
value: 73.63809739428072
- type: euclidean_pearson
value: 74.93718121215906
- type: euclidean_spearman
value: 73.63810227650436
- type: manhattan_pearson
value: 74.8737197659424
- type: manhattan_spearman
value: 73.57534688126572
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.67482138157656
- type: cos_sim_spearman
value: 83.23485786963107
- type: euclidean_pearson
value: 82.50847772197369
- type: euclidean_spearman
value: 83.23485786963107
- type: manhattan_pearson
value: 82.48916218377576
- type: manhattan_spearman
value: 83.19756483500014
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.11626268793967
- type: cos_sim_spearman
value: 81.58184691061507
- type: euclidean_pearson
value: 80.65900869004938
- type: euclidean_spearman
value: 81.58184691061507
- type: manhattan_pearson
value: 80.67912306966772
- type: manhattan_spearman
value: 81.59957593393145
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.3140990821409
- type: cos_sim_spearman
value: 80.59196586367551
- type: euclidean_pearson
value: 80.73014029317672
- type: euclidean_spearman
value: 80.59196586367551
- type: manhattan_pearson
value: 80.5774325136987
- type: manhattan_spearman
value: 80.35102610546238
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 68.34450491529164
- type: cos_sim_spearman
value: 68.79451793414492
- type: euclidean_pearson
value: 68.75619738499324
- type: euclidean_spearman
value: 68.79451793414492
- type: manhattan_pearson
value: 68.75256119543882
- type: manhattan_spearman
value: 68.81836416978547
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 77.95580414975612
- type: cos_sim_spearman
value: 77.89671867168987
- type: euclidean_pearson
value: 77.61352097720862
- type: euclidean_spearman
value: 77.89671867168987
- type: manhattan_pearson
value: 77.65282228135632
- type: manhattan_spearman
value: 77.91730533156762
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (en)
type: PhilipMay/stsb_multi_mt
config: en
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 77.95580421496413
- type: cos_sim_spearman
value: 77.89671867168987
- type: euclidean_pearson
value: 77.61352107168794
- type: euclidean_spearman
value: 77.89671867168987
- type: manhattan_pearson
value: 77.65282237231794
- type: manhattan_spearman
value: 77.91730533156762
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.22928110092924
- type: mrr
value: 94.46700902583257
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 56.011
- type: map_at_10
value: 65.544
- type: map_at_100
value: 66.034
- type: map_at_1000
value: 66.065
- type: map_at_3
value: 63.077000000000005
- type: map_at_5
value: 64.354
- type: mrr_at_1
value: 59.0
- type: mrr_at_10
value: 66.74900000000001
- type: mrr_at_100
value: 67.176
- type: mrr_at_1000
value: 67.203
- type: mrr_at_3
value: 65.056
- type: mrr_at_5
value: 65.956
- type: ndcg_at_1
value: 59.0
- type: ndcg_at_10
value: 69.95599999999999
- type: ndcg_at_100
value: 72.27
- type: ndcg_at_1000
value: 73.066
- type: ndcg_at_3
value: 65.837
- type: ndcg_at_5
value: 67.633
- type: precision_at_1
value: 59.0
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.866999999999997
- type: recall_at_1
value: 56.011
- type: recall_at_10
value: 82.133
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 70.95
- type: recall_at_5
value: 75.556
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81584158415842
- type: cos_sim_ap
value: 94.67482871230736
- type: cos_sim_f1
value: 90.67201604814443
- type: cos_sim_precision
value: 90.94567404426559
- type: cos_sim_recall
value: 90.4
- type: dot_accuracy
value: 99.81584158415842
- type: dot_ap
value: 94.67482871230737
- type: dot_f1
value: 90.67201604814443
- type: dot_precision
value: 90.94567404426559
- type: dot_recall
value: 90.4
- type: euclidean_accuracy
value: 99.81584158415842
- type: euclidean_ap
value: 94.67482871230737
- type: euclidean_f1
value: 90.67201604814443
- type: euclidean_precision
value: 90.94567404426559
- type: euclidean_recall
value: 90.4
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.6409082219286
- type: manhattan_f1
value: 90.50949050949052
- type: manhattan_precision
value: 90.41916167664671
- type: manhattan_recall
value: 90.60000000000001
- type: max_accuracy
value: 99.81584158415842
- type: max_ap
value: 94.67482871230737
- type: max_f1
value: 90.67201604814443
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 62.63494511649264
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.165838327685755
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.384873075208084
- type: mrr
value: 52.196439181733304
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 32.13690355567596
- type: cos_sim_spearman
value: 31.38349778638125
- type: dot_pearson
value: 32.13689596691593
- type: dot_spearman
value: 31.38349778638125
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.26
- type: map_at_10
value: 2.08
- type: map_at_100
value: 12.598
- type: map_at_1000
value: 30.119
- type: map_at_3
value: 0.701
- type: map_at_5
value: 1.11
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 97.167
- type: mrr_at_100
value: 97.167
- type: mrr_at_1000
value: 97.167
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 97.167
- type: ndcg_at_1
value: 91.0
- type: ndcg_at_10
value: 81.69800000000001
- type: ndcg_at_100
value: 62.9
- type: ndcg_at_1000
value: 55.245999999999995
- type: ndcg_at_3
value: 86.397
- type: ndcg_at_5
value: 84.286
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 87.0
- type: precision_at_100
value: 64.86
- type: precision_at_1000
value: 24.512
- type: precision_at_3
value: 90.667
- type: precision_at_5
value: 88.8
- type: recall_at_1
value: 0.26
- type: recall_at_10
value: 2.238
- type: recall_at_100
value: 15.488
- type: recall_at_1000
value: 51.6
- type: recall_at_3
value: 0.716
- type: recall_at_5
value: 1.151
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.376
- type: map_at_10
value: 13.142000000000001
- type: map_at_100
value: 19.763
- type: map_at_1000
value: 21.319
- type: map_at_3
value: 6.805999999999999
- type: map_at_5
value: 8.952
- type: mrr_at_1
value: 46.939
- type: mrr_at_10
value: 61.082
- type: mrr_at_100
value: 61.45
- type: mrr_at_1000
value: 61.468999999999994
- type: mrr_at_3
value: 57.483
- type: mrr_at_5
value: 59.931999999999995
- type: ndcg_at_1
value: 44.897999999999996
- type: ndcg_at_10
value: 32.35
- type: ndcg_at_100
value: 42.719
- type: ndcg_at_1000
value: 53.30200000000001
- type: ndcg_at_3
value: 37.724999999999994
- type: ndcg_at_5
value: 34.79
- type: precision_at_1
value: 46.939
- type: precision_at_10
value: 28.366999999999997
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.557
- type: precision_at_3
value: 38.095
- type: precision_at_5
value: 33.469
- type: recall_at_1
value: 3.376
- type: recall_at_10
value: 20.164
- type: recall_at_100
value: 50.668
- type: recall_at_1000
value: 83.159
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.872
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.739
- type: ap
value: 12.17931839228834
- type: f1
value: 51.05383188624636
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.72891907187323
- type: f1
value: 56.997614557150946
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 39.825318429345224
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65619598259522
- type: cos_sim_ap
value: 66.17412885183877
- type: cos_sim_f1
value: 63.09125656951745
- type: cos_sim_precision
value: 57.63858577040594
- type: cos_sim_recall
value: 69.68337730870712
- type: dot_accuracy
value: 83.65619598259522
- type: dot_ap
value: 66.17413621964548
- type: dot_f1
value: 63.09125656951745
- type: dot_precision
value: 57.63858577040594
- type: dot_recall
value: 69.68337730870712
- type: euclidean_accuracy
value: 83.65619598259522
- type: euclidean_ap
value: 66.17412836413126
- type: euclidean_f1
value: 63.09125656951745
- type: euclidean_precision
value: 57.63858577040594
- type: euclidean_recall
value: 69.68337730870712
- type: manhattan_accuracy
value: 83.5548667819038
- type: manhattan_ap
value: 66.07998834521334
- type: manhattan_f1
value: 62.96433419721092
- type: manhattan_precision
value: 59.14676559239509
- type: manhattan_recall
value: 67.30870712401055
- type: max_accuracy
value: 83.65619598259522
- type: max_ap
value: 66.17413621964548
- type: max_f1
value: 63.09125656951745
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.55706911941631
- type: cos_sim_ap
value: 85.20971331546805
- type: cos_sim_f1
value: 77.28446050593702
- type: cos_sim_precision
value: 74.16135881104033
- type: cos_sim_recall
value: 80.6821681552202
- type: dot_accuracy
value: 88.55706911941631
- type: dot_ap
value: 85.2097154112633
- type: dot_f1
value: 77.28446050593702
- type: dot_precision
value: 74.16135881104033
- type: dot_recall
value: 80.6821681552202
- type: euclidean_accuracy
value: 88.55706911941631
- type: euclidean_ap
value: 85.20971719214488
- type: euclidean_f1
value: 77.28446050593702
- type: euclidean_precision
value: 74.16135881104033
- type: euclidean_recall
value: 80.6821681552202
- type: manhattan_accuracy
value: 88.52020025614158
- type: manhattan_ap
value: 85.17569799117058
- type: manhattan_f1
value: 77.27157773040933
- type: manhattan_precision
value: 72.79286638077734
- type: manhattan_recall
value: 82.33754234678165
- type: max_accuracy
value: 88.55706911941631
- type: max_ap
value: 85.20971719214488
- type: max_f1
value: 77.28446050593702
- task:
type: Clustering
dataset:
name: MTEB WikiCitiesClustering
type: jinaai/cities_wiki_clustering
config: default
split: test
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
metrics:
- type: v_measure
value: 85.63474850264893
---
# yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF
This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-long`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-long-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-long-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-long-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yixuan-chia/snowflake-arctic-embed-m-long-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-long-q8_0.gguf -c 2048
```
|
quangtuyennguyen/my_awesome_qa_model
|
quangtuyennguyen
| 2024-08-29T03:44:24Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-08-29T00:41:00Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3423 | 1.0 | 500 | 2.5012 |
| 2.2342 | 2.0 | 1000 | 2.3128 |
| 1.8956 | 3.0 | 1500 | 2.3096 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
PotatoB/Model_Kinship_4-1
|
PotatoB
| 2024-08-29T03:34:49Z | 5 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"PotatoB/evo_exp-point-2-1",
"PotatoB/evo_exp-point-3-2",
"license:apache-2.0",
"region:us"
] | null | 2024-08-29T03:31:59Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- PotatoB/evo_exp-point-2-1
- PotatoB/evo_exp-point-3-2
---
# evo_exp-point-4-5
evo_exp-point-4-5 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [PotatoB/evo_exp-point-2-1](https://huggingface.co/PotatoB/evo_exp-point-2-1)
* [PotatoB/evo_exp-point-3-2](https://huggingface.co/PotatoB/evo_exp-point-3-2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: PotatoB/evo_exp-point-2-1
layer_range: [0, 32]
- model: PotatoB/evo_exp-point-3-2
layer_range: [0, 32]
merge_method: slerp
base_model: PotatoB/evo_exp-point-2-1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
minhhien0811/deita-3366
|
minhhien0811
| 2024-08-29T03:29:03Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T03:26:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fitri-bt/fitri-phi
|
fitri-bt
| 2024-08-29T03:13:35Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T03:10:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf
|
RichardErkhov
| 2024-08-29T03:11:44Z | 358 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-08-28T05:18:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
openbuddy-deepseek-67b-v15.1 - GGUF
- Model creator: https://huggingface.co/OpenBuddy/
- Original model: https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [openbuddy-deepseek-67b-v15.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q2_K.gguf) | Q2_K | 23.4GB |
| [openbuddy-deepseek-67b-v15.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.IQ3_XS.gguf) | IQ3_XS | 25.95GB |
| [openbuddy-deepseek-67b-v15.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.IQ3_S.gguf) | IQ3_S | 27.39GB |
| [openbuddy-deepseek-67b-v15.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q3_K_S.gguf) | Q3_K_S | 27.3GB |
| [openbuddy-deepseek-67b-v15.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.IQ3_M.gguf) | IQ3_M | 28.43GB |
| [openbuddy-deepseek-67b-v15.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q3_K.gguf) | Q3_K | 30.41GB |
| [openbuddy-deepseek-67b-v15.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q3_K_M.gguf) | Q3_K_M | 30.41GB |
| [openbuddy-deepseek-67b-v15.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q3_K_L.gguf) | Q3_K_L | 33.13GB |
| [openbuddy-deepseek-67b-v15.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.IQ4_XS.gguf) | IQ4_XS | 34.0GB |
| [openbuddy-deepseek-67b-v15.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q4_0.gguf) | Q4_0 | 35.53GB |
| [openbuddy-deepseek-67b-v15.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.IQ4_NL.gguf) | IQ4_NL | 35.86GB |
| [openbuddy-deepseek-67b-v15.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/blob/main/openbuddy-deepseek-67b-v15.1.Q4_K_S.gguf) | Q4_K_S | 35.77GB |
| [openbuddy-deepseek-67b-v15.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q4_K | 37.66GB |
| [openbuddy-deepseek-67b-v15.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q4_K_M | 37.66GB |
| [openbuddy-deepseek-67b-v15.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q4_1 | 39.41GB |
| [openbuddy-deepseek-67b-v15.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q5_0 | 43.28GB |
| [openbuddy-deepseek-67b-v15.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q5_K_S | 43.28GB |
| [openbuddy-deepseek-67b-v15.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q5_K | 44.38GB |
| [openbuddy-deepseek-67b-v15.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q5_K_M | 44.38GB |
| [openbuddy-deepseek-67b-v15.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q5_1 | 47.16GB |
| [openbuddy-deepseek-67b-v15.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q6_K | 51.52GB |
| [openbuddy-deepseek-67b-v15.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-deepseek-67b-v15.1-gguf/tree/main/) | Q8_0 | 66.73GB |
Original model description:
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-67b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
|
cheelam/mms-tts-purepixel-finetuned
|
cheelam
| 2024-08-29T02:56:24Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-08-22T15:16:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TTTXXX01/Meta-Llama-3-8B-Instruct-MI-5e-7
|
TTTXXX01
| 2024-08-29T02:53:17Z | 5 | 0 | null |
[
"safetensors",
"llama",
"alignment_handbook-handbook",
"generated_from_trainer",
"dataset:princeton-nlp/llama3-ultrafeedback-armorm",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-08-29T02:46:44Z |
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- alignment_handbook-handbook
- generated_from_trainer
datasets:
- princeton-nlp/llama3-ultrafeedback-armorm
model-index:
- name: Meta-Llama-3-8B-Instruct-MI-5e-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/tengxiao01/huggingface/runs/rjb4skgf)
# Meta-Llama-3-8B-Instruct-MI-5e-7
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the princeton-nlp/llama3-ultrafeedback-armorm dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2118
- Rewards/chosen: -0.3920
- Rewards/rejected: -0.5306
- Rewards/accuracies: 0.7175
- Rewards/margins: 0.1385
- Logps/rejected: -0.5306
- Logps/chosen: -0.3920
- Logits/rejected: 0.0847
- Logits/chosen: 0.1025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.2128 | 0.8550 | 400 | 1.2118 | -0.3920 | -0.5306 | 0.7175 | 0.1385 | -0.5306 | -0.3920 | 0.0847 | 0.1025 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF
|
PeterTP
| 2024-08-29T02:51:41Z | 8 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/MN-12B-Lyra-v3",
"base_model:quantized:Sao10K/MN-12B-Lyra-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-29T02:50:48Z |
---
base_model: Sao10K/MN-12B-Lyra-v3
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF
This model was converted to GGUF format from [`Sao10K/MN-12B-Lyra-v3`](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF --hf-file mn-12b-lyra-v3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF --hf-file mn-12b-lyra-v3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF --hf-file mn-12b-lyra-v3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo PeterTP/MN-12B-Lyra-v3-Q8_0-GGUF --hf-file mn-12b-lyra-v3-q8_0.gguf -c 2048
```
|
mradermacher/MagnumRPMerge-i1-GGUF
|
mradermacher
| 2024-08-29T02:12:18Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-28T20:33:24Z |
---
base_model: DazzlingXeno/MagnumRPMerge
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DazzlingXeno/MagnumRPMerge
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MagnumRPMerge-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/MagnumRPMerge-i1-GGUF/resolve/main/MagnumRPMerge.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yefo-ufpe/distilbert-base-uncased-swag-full
|
yefo-ufpe
| 2024-08-29T02:10:49Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"multiple-choice",
"trl",
"sft",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-08-28T23:09:42Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- trl
- sft
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-swag-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-swag-full
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7135
- Accuracy: 0.6963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.918 | 1.0 | 4597 | 0.8133 | 0.6691 |
| 0.5775 | 2.0 | 9194 | 0.8260 | 0.6879 |
| 0.3129 | 3.0 | 13791 | 1.0329 | 0.6933 |
| 0.1728 | 4.0 | 18388 | 1.8823 | 0.6927 |
| 0.0824 | 5.0 | 22985 | 2.7135 | 0.6963 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ndeclarke/whisper-base-malayalam-colab-CV17.0
|
ndeclarke
| 2024-08-29T02:07:45Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-28T23:46:31Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: whisper-base-malayalam-colab-CV17.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: ml
split: test
args: ml
metrics:
- name: Wer
type: wer
value: 0.7675693101225016
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-malayalam-colab-CV17.0
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4369
- Wer: 0.7676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.0335 | 1.5748 | 200 | 0.4105 | 0.9504 |
| 0.2301 | 3.1496 | 400 | 0.3121 | 0.8417 |
| 0.0954 | 4.7244 | 600 | 0.2964 | 0.8288 |
| 0.0442 | 6.2992 | 800 | 0.3350 | 0.7843 |
| 0.0217 | 7.8740 | 1000 | 0.3740 | 0.8133 |
| 0.0104 | 9.4488 | 1200 | 0.3858 | 0.7782 |
| 0.0048 | 11.0236 | 1400 | 0.4128 | 0.7747 |
| 0.002 | 12.5984 | 1600 | 0.4319 | 0.7747 |
| 0.0006 | 14.1732 | 1800 | 0.4324 | 0.7701 |
| 0.0002 | 15.7480 | 2000 | 0.4369 | 0.7676 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
backyardai/L3.1-8B-Niitama-v1.1-GGUF
|
backyardai
| 2024-08-29T01:47:02Z | 217 | 0 | null |
[
"gguf",
"en",
"base_model:Sao10K/L3.1-8B-Niitama-v1.1",
"base_model:quantized:Sao10K/L3.1-8B-Niitama-v1.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-16T06:07:23Z |
---
base_model: Sao10K/L3.1-8B-Niitama-v1.1
language:
- en
license: cc-by-nc-4.0
model_name: L3.1-8B-Niitama-v1.1-GGUF
quantized_by: brooketh
parameter_count: 8030261312
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# L3.1 Niitama V1.1 8B
- **Creator:** [Sao10K](https://huggingface.co/Sao10K/)
- **Original:** [L3.1 Niitama V1.1 8B](https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1)
- **Date Created:** 2024-08-03
- **Trained Context:** 131072 tokens
- **Description:** Version 1.1 of a very experimental model using experimental methods. Based on Llama-3 8B. Very quirky and unpredictable; may not be to everyone's taste.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
***
|
alban12/nllb-200-distilled-600M-mt-finetuned-zindi-dyu-to-fr
|
alban12
| 2024-08-29T01:44:18Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-08-28T18:08:10Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-distilled-600M-mt-finetuned-zindi-dyu-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-mt-finetuned-zindi-dyu-to-fr
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2584
- Bleu: 6.4075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.1707 | 0.1575 | 20 | 2.7356 | 4.8084 |
| 2.9074 | 0.3150 | 40 | 2.5883 | 5.0141 |
| 2.7168 | 0.4724 | 60 | 2.4902 | 5.5785 |
| 2.6912 | 0.6299 | 80 | 2.4154 | 5.7743 |
| 2.6062 | 0.7874 | 100 | 2.3742 | 6.0010 |
| 2.5794 | 0.9449 | 120 | 2.3480 | 6.1354 |
| 2.4634 | 1.1024 | 140 | 2.3314 | 5.9899 |
| 2.5055 | 1.2598 | 160 | 2.3167 | 6.1080 |
| 2.5062 | 1.4173 | 180 | 2.3032 | 6.3784 |
| 2.4771 | 1.5748 | 200 | 2.2944 | 6.4510 |
| 2.4284 | 1.7323 | 220 | 2.2854 | 6.2883 |
| 2.4423 | 1.8898 | 240 | 2.2783 | 6.5036 |
| 2.3202 | 2.0472 | 260 | 2.2730 | 6.4039 |
| 2.3855 | 2.2047 | 280 | 2.2701 | 6.2921 |
| 2.4292 | 2.3622 | 300 | 2.2658 | 6.3025 |
| 2.3678 | 2.5197 | 320 | 2.2626 | 6.2881 |
| 2.4158 | 2.6772 | 340 | 2.2600 | 6.3684 |
| 2.351 | 2.8346 | 360 | 2.2588 | 6.2852 |
| 2.3755 | 2.9921 | 380 | 2.2584 | 6.2819 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
rusticluftig/700m-better
|
rusticluftig
| 2024-08-29T01:43:18Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T05:09:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mc0c0z/Medical-Depth-Anything-V2-Small
|
mc0c0z
| 2024-08-29T01:28:50Z | 132 | 0 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-08-18T11:39:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
agentlans/Llama3-vodka
|
agentlans
| 2024-08-29T01:19:26Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"uncensored",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-11T04:14:34Z |
---
library_name: transformers
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge
---
# Llama3-vodka
- Input: text only
- Output: text only
This model is like vodka. It aims to be pure, potent, and versatile.
- Pure: shouldn't greatly affect Llama 3 Instruct's capabilities and writing style except for uncensoring
- Potent: it's a merge of abliterated models - it should stay uncensored after merging and finetuning
- Versatile: basically Llama 3 Instruct except uncensored - drink it straight, mix it, finetune it, and make cocktails
Please enjoy responsibly.
## Safety and risks
- Excessive consumption is bad for your health
- The model can produce harmful, offensive, or inappropriate content if prompted to do so
- The model has weakened safeguards and a lack of moral and ethical judgements
- The user takes responsibility for all outputs produced by the model
- It is recommended to use the model in controlled environments where its risks can be safely managed
## Models used:
- [cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2](https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2)
- [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
- Meta-Llama-3-Daredevil-8B-abliterated-Instruct-16, which is Llama 3 8B Instruct with
- rank 32 LoRA of [Meta-Llama-3-Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) vs. [Meta-Llama-3-Daredevil](https://huggingface.co/mlabonne/Daredevil-8B)
- rank 16 LoRA of Llama 3 8B Instruct vs. Llama 3 8B Base
The above models were merged onto [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) using the "task arithmetic" merge method. The model merges and LoRA extractions were done using [mergekit](https://github.com/arcee-ai/mergekit).
|
agentlans/Llama3.1-vodka
|
agentlans
| 2024-08-29T01:17:15Z | 5 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"uncensored",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:agentlans/Llama3-vodka",
"base_model:merge:agentlans/Llama3-vodka",
"base_model:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"base_model:merge:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"base_model:mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:merge:mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-23T06:33:02Z |
---
base_model:
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
- grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
- agentlans/Llama3-vodka
- NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge
---
# Llama3.1-vodka
- Input: text only
- Output: text only
This model is like vodka. It aims to be pure, potent, and versatile.
- Pure: shouldn't greatly affect Llama 3.1 Instruct's capabilities and writing style except for uncensoring
- Potent: it's a merge of abliterated models - it should stay uncensored after merging and finetuning
- Versatile: basically Llama 3.1 Instruct except uncensored - drink it straight, mix it, finetune it, and make cocktails
Please enjoy responsibly.
Note that this model may still censor at times. If that's undesirable, tell the AI to be more uncensored and uninhibited.
## Safety and risks
- Excessive consumption is bad for your health
- The model can produce harmful, offensive, or inappropriate content if prompted to do so
- The model has weakened safeguards and a lack of moral and ethical judgements
- The user takes responsibility for all outputs produced by the model
- It is recommended to use the model in controlled environments where its risks can be safely managed
## Models used:
- [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated)
- `Llama-3.1-8B-Instruct-abliterated_via_adapter2` (Llama 3.1 adapted version of [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter))
- `Llama3.1-vodka-ported2` (Llama 3.1 adapted verison of [agentlans/Llama3-vodka](https://huggingface.co/agentlans/Llama3-vodka))
The above models were merged onto [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct) using the "task arithmetic" merge method. The model merges and LoRA extractions were done using [mergekit](https://github.com/arcee-ai/mergekit).
|
mc0c0z/Medical-Depth-Anything-V2-Small-Frozen-Encoder
|
mc0c0z
| 2024-08-29T01:05:40Z | 135 | 0 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-08-18T11:36:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/flux1-dev-minus-v1-fp8-flux
|
John6666
| 2024-08-29T01:04:06Z | 146 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"Flux",
"fp8",
"float8_e4m3fn",
"en",
"base_model:bluepen5805/FLUX.1-dev-minus",
"base_model:finetune:bluepen5805/FLUX.1-dev-minus",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] |
text-to-image
| 2024-08-29T00:54:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- Flux
- fp8
- float8_e4m3fn
base_model: bluepen5805/FLUX.1-dev-minus
---
Original model is [here](https://huggingface.co/bluepen5805/FLUX.1-dev-minus).
This model created by [bluepen5805](https://huggingface.co/bluepen5805).
## Notice
This is an experimental conversion in Spaces using a homebrew script. serverless Inference API does not currently support torch float8_e4m3fn, so it does not work.
I have not been able to confirm if the conversion is working properly.
Please consider this as a test run only.
|
QuantFactory/opus-v1.2-llama-3-8b-GGUF
|
QuantFactory
| 2024-08-29T00:42:39Z | 89 | 1 | null |
[
"gguf",
"unsloth",
"axolotl",
"text-generation",
"en",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-04-24T23:57:52Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
license: cc-by-nc-nd-4.0
---

# QuantFactory/opus-v1.2-llama-3-8b-GGUF
This is quantized version of [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) created using llama.cpp
# Original Model Card
# Llama 3 DreamGen Opus
> ## 🚨 WARNING 🚨
>
> This model has issues, please use the following preview models instead:
> - [New train on top of Llama 3 8B Base](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2)
> - [New train on top of Llama 3 8B Instruct](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5)
>
> Make sure to read [this discussion](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/discussions/3#6622914ac2925305f6d8b86c) if the model won't stop generating output.
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play.
- There can be multiple subsequent message with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 34B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API an local models
- SillyTavern (staging) comes with built-in DreamGen preset for RP
- Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- Character cards specifically rewritten for the built-in DreamGen preset:
- [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
- [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only, other use is not permitted.
|
leonzhou286/llama3_8b_instruct_moe
|
leonzhou286
| 2024-08-29T00:31:33Z | 8 | 0 | null |
[
"safetensors",
"llama_moe",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:mit",
"region:us"
] | null | 2024-08-29T00:18:00Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: mit
---
# Llama 3 8b Instruct MOE
Llama 3 8b Instruct base model converted to MOE style by randomly partitioning the FFN layers of each decoder layer into 8 experts of the same size. Weights are directly taken from the llama3 instruct base model.
|
RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf
|
RichardErkhov
| 2024-08-29T00:21:06Z | 15 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T19:36:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
agiin-13.6B-v0.0 - GGUF
- Model creator: https://huggingface.co/mncai/
- Original model: https://huggingface.co/mncai/agiin-13.6B-v0.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [agiin-13.6B-v0.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q2_K.gguf) | Q2_K | 4.77GB |
| [agiin-13.6B-v0.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.IQ3_XS.gguf) | IQ3_XS | 5.3GB |
| [agiin-13.6B-v0.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.IQ3_S.gguf) | IQ3_S | 5.6GB |
| [agiin-13.6B-v0.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q3_K_S.gguf) | Q3_K_S | 5.57GB |
| [agiin-13.6B-v0.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.IQ3_M.gguf) | IQ3_M | 5.78GB |
| [agiin-13.6B-v0.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q3_K.gguf) | Q3_K | 6.2GB |
| [agiin-13.6B-v0.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q3_K_M.gguf) | Q3_K_M | 6.2GB |
| [agiin-13.6B-v0.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q3_K_L.gguf) | Q3_K_L | 6.75GB |
| [agiin-13.6B-v0.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.IQ4_XS.gguf) | IQ4_XS | 6.96GB |
| [agiin-13.6B-v0.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q4_0.gguf) | Q4_0 | 7.26GB |
| [agiin-13.6B-v0.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.IQ4_NL.gguf) | IQ4_NL | 7.33GB |
| [agiin-13.6B-v0.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q4_K_S.gguf) | Q4_K_S | 7.31GB |
| [agiin-13.6B-v0.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q4_K.gguf) | Q4_K | 7.71GB |
| [agiin-13.6B-v0.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q4_K_M.gguf) | Q4_K_M | 7.71GB |
| [agiin-13.6B-v0.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q4_1.gguf) | Q4_1 | 8.05GB |
| [agiin-13.6B-v0.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q5_0.gguf) | Q5_0 | 8.84GB |
| [agiin-13.6B-v0.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q5_K_S.gguf) | Q5_K_S | 8.84GB |
| [agiin-13.6B-v0.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q5_K.gguf) | Q5_K | 9.08GB |
| [agiin-13.6B-v0.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q5_K_M.gguf) | Q5_K_M | 9.08GB |
| [agiin-13.6B-v0.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q5_1.gguf) | Q5_1 | 9.64GB |
| [agiin-13.6B-v0.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q6_K.gguf) | Q6_K | 10.53GB |
| [agiin-13.6B-v0.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/mncai_-_agiin-13.6B-v0.0-gguf/blob/main/agiin-13.6B-v0.0.Q8_0.gguf) | Q8_0 | 13.64GB |
Original model description:
---
license: apache-2.0
language:
- en
---
# Model Card for mncai/agiin-13.6B-v0.0
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. And as for generative AI, we are developing products like Code Assistant, TOD Chatbot, LLMOps, and are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
based mistral arch. pretrain, instruction tuned and dpo.
### How to Use
Here give some examples of how to use our model.
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
import transformers
import torch
hf_model = 'mncai/agiin-13.6B-v0.0'
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 각 구의 부피는 몇배야? 설명도 같이 해줘.\n<|assistant|>\n"
sequences = pipeline(
message,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=2048,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected]
|
yefo-ufpe/bert-base-uncased-swag-full
|
yefo-ufpe
| 2024-08-29T00:06:16Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"trl",
"sft",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-08-29T00:05:56Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- trl
- sft
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-swag-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-swag-full
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8572
- Accuracy: 0.7760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7762 | 1.0 | 4597 | 0.6281 | 0.7516 |
| 0.4259 | 2.0 | 9194 | 0.6857 | 0.7668 |
| 0.2108 | 3.0 | 13791 | 0.9799 | 0.7689 |
| 0.1207 | 4.0 | 18388 | 1.5455 | 0.7721 |
| 0.0523 | 5.0 | 22985 | 1.8572 | 0.7760 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
John6666/wai-25d-pdxl-v10-sdxl
|
John6666
| 2024-08-29T00:05:47Z | 125 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T23:55:10Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- pony
---
Original model is [here](https://civitai.com/models/696083/wai-25d-pdxl?modelVersionId=778958).
This model created by [WAI0731](https://civitai.com/user/WAI0731).
|
backyardai/mini-magnum-12b-v1.1-GGUF
|
backyardai
| 2024-08-29T00:03:41Z | 295 | 1 | null |
[
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:intervitens/mini-magnum-12b-v1.1",
"base_model:quantized:intervitens/mini-magnum-12b-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T23:49:13Z |
---
base_model: intervitens/mini-magnum-12b-v1.1
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
model_name: mini-magnum-12b-v1.1-GGUF
quantized_by: brooketh
parameter_count: 12247792640
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Mini Magnum V1.1 12B
- **Creator:** [intervitens](https://huggingface.co/intervitens/)
- **Original:** [Mini Magnum V1.1 12B](https://huggingface.co/intervitens/mini-magnum-12b-v1.1)
- **Date Created:** 2024-07-24
- **Trained Context:** 1024000 tokens
- **Description:** This model is the miniature version of alpindale/magnum-72b-v1, a second entry in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Mistral-Nemo-Base-2407. A new general purpose instruction dataset by kalomaze was added to the training mix for better coherence and general alignment.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
***
|
John6666/ether-pdxl-a3-sdxl
|
John6666
| 2024-08-29T00:02:54Z | 223 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"semi-realistic",
"2.5D",
"illustration",
"cute",
"colorful",
"portrait",
"pony",
"en",
"base_model:gamerdan69/EtherMix",
"base_model:finetune:gamerdan69/EtherMix",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T23:50:27Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- semi-realistic
- 2.5D
- illustration
- cute
- colorful
- portrait
- pony
base_model: gamerdan69/EtherMix
---
Original model is [here](https://huggingface.co/gamerdan69/EtherMix) and on [Civitai](https://civitai.com/models/545628?modelVersionId=778308).
This model created by [gamerdan69](https://civitai.com/user/gamerdan69).
|
John6666/beyond-experimental-v28loramerge-sdxl
|
John6666
| 2024-08-29T00:00:38Z | 123 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"realism",
"anime",
"cartoon",
"styles",
"SDXL Turbo",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T23:52:20Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- realism
- anime
- cartoon
- styles
- SDXL Turbo
---
Original model is [here](https://civitai.com/models/424895/beyond-experimental?modelVersionId=780496).
This model created by [OperationNova](https://civitai.com/user/OperationNova).
|
John6666/azoth-final-sdxl
|
John6666
| 2024-08-28T23:56:08Z | 213 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cute",
"backgrounds",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T23:51:11Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cute
- backgrounds
- pony
---
Original model is [here](https://civitai.com/models/696173/azoth?modelVersionId=779057).
This model created by [renmei](https://civitai.com/user/renmei).
|
John6666/ydy-mix-ydyxlvapl2t-sdxl
|
John6666
| 2024-08-28T23:41:56Z | 149 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cute",
"merge",
"pony",
"en",
"base_model:yodayo-ai/clandestine-xl-1.0",
"base_model:merge:yodayo-ai/clandestine-xl-1.0",
"base_model:yodayo-ai/holodayo-xl-2.1",
"base_model:merge:yodayo-ai/holodayo-xl-2.1",
"base_model:yodayo-ai/kivotos-xl-2.0",
"base_model:merge:yodayo-ai/kivotos-xl-2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-28T23:37:17Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cute
- merge
- pony
base_model:
- yodayo-ai/kivotos-xl-2.0
- yodayo-ai/holodayo-xl-2.1
- yodayo-ai/clandestine-xl-1.0
---
Original model is [here](https://civitai.com/models/641249/ydy-mix?modelVersionId=779378).
This model created by [Kodokuna](https://civitai.com/user/Kodokuna).
|
adamo1139/danube3-4b-aezakmi-toxic-2908-gguf
|
adamo1139
| 2024-08-28T23:37:11Z | 7 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T23:25:32Z |
---
license: apache-2.0
---
|
Orenguteng/Llama-3.1-8B-Lexi-Uncensored-GGUF
|
Orenguteng
| 2024-08-28T23:36:29Z | 1,189 | 19 | null |
[
"gguf",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-23T23:16:19Z |
---
license: llama3.1
---
LLM Leaderboard 2 results:
---
Lexi suggests that simply uncensoring the LLM makes it smarter. The dataset used to tune this model does not contain any "new knowledge" or any contamination whatsoever, yet, we see the evaluation scores shot up when we get rid of biases and refusals.
Lexi not only retains the original instruct, but it beats it.

NOTE: UGI Leaderboard
The UGI Leaderboard runs the Q4 for its evaluations which results in bad results for this model. As noted, the Q4 has issues retaining the fine tuning for some reasons ends up not as good, which will be fixed for V3.
V2 has been released, I recommend you download the new version:
---
https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

This model is based on Llama-3.1-8b-Instruct, and is governed by [META LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license.
IMPORTANT:
---
Use the same template as the official Llama 3.1 8B instruct.
System tokens must be present during inference, even if you set an empty system message. If you are unsure, just add a short system message as you wish.
Feedback:
---
If you find any issues or have suggestions for improvements, feel free to leave a review and I will look into it for upcoming improvements and next version.

|
zhenghenry/gpt-neo
|
zhenghenry
| 2024-08-28T23:36:17Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T23:36:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mikeyandfriends/PixelWave_FLUX.1-schnell_01
|
mikeyandfriends
| 2024-08-28T23:25:10Z | 5 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-08-28T23:07:27Z |
---
license: apache-2.0
---
|
John6666/77oussam-food-photographie-v10-sdxl
|
John6666
| 2024-08-28T23:19:36Z | 203 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photographie",
"food",
"foodsy",
"kitchen",
"restaurant",
"en",
"base_model:hsmjpg/77oussam-SDXL",
"base_model:finetune:hsmjpg/77oussam-SDXL",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-25T22:04:50Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photographie
- food
- foodsy
- kitchen
- restaurant
base_model: hsmjpg/77oussam-SDXL
---
Original model is [here](https://civitai.com/models/685444/77oussam-food-photographie?modelVersionId=767145).
This model created by [77ossam](https://civitai.com/user/77ossam).
|
John6666/77oussam-realistic-v2-v20-sdxl
|
John6666
| 2024-08-28T23:19:09Z | 204 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photography",
"en",
"base_model:hsmjpg/77oussam-SDXL",
"base_model:finetune:hsmjpg/77oussam-SDXL",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-26T12:35:10Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photography
base_model: hsmjpg/77oussam-SDXL
---
Original model is [here](https://civitai.com/models/687848/77oussam-realistic-v2?modelVersionId=769864).
This model created by [77ossam](https://civitai.com/user/77ossam).
|
foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF
|
foxcyan
| 2024-08-28T23:08:31Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:quantized:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-08-28T23:08:19Z |
---
base_model: microsoft/Phi-3.5-mini-instruct
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF --hf-file phi-3.5-mini-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF --hf-file phi-3.5-mini-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF --hf-file phi-3.5-mini-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo foxcyan/Phi-3.5-mini-instruct-Q5_K_S-GGUF --hf-file phi-3.5-mini-instruct-q5_k_s.gguf -c 2048
```
|
mertgulexe/mistral-ppo
|
mertgulexe
| 2024-08-28T23:04:19Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-08-23T20:00:39Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="gulmert89//tmp/tmphxi1_39z/gulmert89/mistral-ppo")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("gulmert89//tmp/tmphxi1_39z/gulmert89/mistral-ppo")
model = AutoModelForCausalLMWithValueHead.from_pretrained("gulmert89//tmp/tmphxi1_39z/gulmert89/mistral-ppo")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
distily/distily_test_attn_ortho
|
distily
| 2024-08-28T22:36:45Z | 5 | 0 |
Distily
|
[
"Distily",
"tensorboard",
"safetensors",
"gpt2",
"bitnet",
"1.58b",
"generated_from_trainer",
"dataset:wikimedia/wikipedia",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-08-24T18:04:22Z |
---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_test_attn_ortho
results: []
---
# Summary
Distilled with [Distily](https://github.com/lapp0/distily) library
using teacher model [gpt2](https://huggingface.co/gpt2)
on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
# Model description
More information needed
# Intended uses & limitations
More information needed
-->
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
# Benchmark Metrics Comparison
| Metric | |
| :--- |
# Resource Usage Comparison
- VRAM Use: 7.7872 GB
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
<details>
<summary>Module Diff Details</summary>
```diff
```
</details>
<br/>
# Train Dataset
Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2, projector=orthogonal))
```
# Hyperparameters
The following hyperparameters were used during training:
<details>
<summary>Expand</summary>
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `cosine_with_min_lr`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2, projector=orthogonal))`
- train_embeddings: `True`
- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x781ebe1e3c10>`
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
</details>
<br/>
# Framework Versions
- Distily 0.3.0
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.21.0
|
sfulay/zephyr-7b-dpo-full-prometheus-reward-scale-05
|
sfulay
| 2024-08-28T22:33:52Z | 9 | 0 | null |
[
"safetensors",
"mistral",
"trl",
"dpo",
"generated_from_trainer",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-08-21T00:14:06Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-prometheus-reward-scale-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full-prometheus-reward-scale-05
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5286
- Rewards/chosen: -1.4143
- Rewards/rejected: -2.7417
- Rewards/accuracies: 0.7629
- Rewards/margins: 1.3275
- Logps/rejected: -493.2510
- Logps/chosen: -417.0316
- Logits/rejected: 1.9856
- Logits/chosen: 0.4911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6696 | 0.1143 | 50 | 0.6584 | -0.0084 | -0.1643 | 0.6853 | 0.1559 | -235.5072 | -276.4426 | -2.4382 | -2.5406 |
| 0.6122 | 0.2286 | 100 | 0.6111 | -0.4070 | -0.8953 | 0.6767 | 0.4883 | -308.6058 | -316.3019 | -2.5533 | -2.6512 |
| 0.5476 | 0.3429 | 150 | 0.5583 | -1.3343 | -2.3426 | 0.7371 | 1.0083 | -453.3369 | -409.0355 | 0.9770 | 0.1441 |
| 0.5582 | 0.4571 | 200 | 0.5499 | -1.0345 | -2.1424 | 0.7328 | 1.1079 | -433.3173 | -379.0511 | 0.5624 | -0.4976 |
| 0.5503 | 0.5714 | 250 | 0.5393 | -1.1701 | -2.3108 | 0.7371 | 1.1406 | -450.1522 | -392.6152 | 0.7719 | -0.3725 |
| 0.5224 | 0.6857 | 300 | 0.5312 | -1.2228 | -2.5102 | 0.7543 | 1.2874 | -470.0949 | -397.8840 | 1.7088 | 0.1892 |
| 0.5396 | 0.8 | 350 | 0.5290 | -1.4462 | -2.7485 | 0.75 | 1.3024 | -493.9275 | -420.2202 | 1.9215 | 0.4365 |
| 0.55 | 0.9143 | 400 | 0.5286 | -1.4143 | -2.7417 | 0.7629 | 1.3275 | -493.2510 | -417.0316 | 1.9856 | 0.4911 |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
cartesia-ai/Rene-v0.1-1.3b-pytorch
|
cartesia-ai
| 2024-08-28T22:27:21Z | 400 | 54 |
cartesia_pytorch
|
[
"cartesia_pytorch",
"safetensors",
"rene",
"mamba",
"cartesia",
"en",
"dataset:allenai/dolma",
"arxiv:2405.21060",
"license:apache-2.0",
"region:us"
] | null | 2024-08-25T07:24:21Z |
---
license: apache-2.0
language:
- en
datasets:
- allenai/dolma
tags:
- rene
- mamba
- cartesia
library_name: cartesia_pytorch
---
# Model Card for Rene
Rene is a 1.3 billion-parameter language model trained by [Cartesia](https://cartesia.ai).
Rene has a hybrid architecture based on [Mamba-2](https://arxiv.org/abs/2405.21060), with feedforward and sliding window attention layers interspersed.
It uses the [allenai/OLMo-1B-hf](https://huggingface.co/allenai/OLMo-1B-hf) tokenizer.
Rene was pretrained on 1.5 trillion tokens of the [Dolma-1.7](https://huggingface.co/datasets/allenai/dolma) dataset.
For more details, see our [blog post](https://cartesia.ai/blog/on-device).
## Usage
This is the PyTorch version of the model, and it's intended to run on CUDA devices. For use on Mac computers, please use [the native MLX version](https://huggingface.co/cartesia-ai/Rene-v0.1-1.3b-4bit-mlx) instead.
### Installation
The Rene model depends on the `cartesia-pytorch` package, which can be installed with `pip` as follows:
```shell
pip install --no-binary :all: cartesia-pytorch
```
### Generation example
```python
from cartesia_pytorch import ReneLMHeadModel
from transformers import AutoTokenizer
model = ReneLMHeadModel.from_pretrained("cartesia-ai/Rene-v0.1-1.3b-pytorch").half().cuda()
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
in_message = ["Rene Descartes was"]
inputs = tokenizer(in_message, return_tensors="pt")
outputs = model.generate(inputs.input_ids.cuda(), max_length=50, top_k=100, top_p=0.99)
out_message = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(out_message)
# Example output: "Rene Descartes was a French mathematician, philosopher, and scientist. Descartes is famously credited for creating the Cartesian coordinate system: a 3 dimensional representation of points, vectors, and directions. This work is, for the most part" ...
```
### Evaluation example
You can use our `cartesia_lm_eval` wrapper around the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) to evaluate our model on standard text benchmarks. Example command (clone this repo and run the below from within the `cartesia-pytorch` directory):
```shell
python -m evals.cartesia_lm_eval --model rene_ssm --model_args pretrained=cartesia-ai/Rene-v0.1-1.3b-pytorch,trust_remote_code=True --trust_remote_code --tasks copa,hellaswag,piqa,arc_easy,arc_challenge,winogrande,openbookqa --cache_requests true --batch_size auto:4 --output_path outputs/rene_evals/
```
## Results on common benchmarks
| Model | Params (B) | Train Tokens | COPA | HellaSwag | MMLU (5-shot) | PIQA | ARC-e | ARC-c | WinoGrande | OpenBookQA | Average |
|------------------------------------------------|------------|--------------|------|-----------|---------------|------|-------|-------|------------|------------|---------|
| allenai/OLMo-1B-hf | 1.2 | 3.0 | 82.0 | 62.9 | 26.2 | 75.1 | 57.4 | 31.1 | 60.0 | 36.2 | 53.9 |
| apple/OpenELM-1\_1B | 1.1 | 1.5 | 81.0 | 64.8 | 27.1 | 75.6 | 55.4 | 32.3 | 61.9 | 36.2 | 54.3 |
| state-spaces/mamba2-1.3b | 1.3 | 0.3 | 82.0 | 60.0 | 25.8 | 73.7 | 64.2 | 33.3 | 61.0 | 37.8 | 54.7 |
| microsoft/phi-1\_5 | 1.4 | 0.15 | 79.0 | 62.6 | 42.5 | 75.5 | 73.2 | 48.0 | 72.8 | 48.0 | 62.7 |
| Qwen/Qwen2-1.5B | 1.5 | 7.0 | 80.0 | 65.4 | 56.0 | 75.5 | 60.4 | 35.0 | 65.8 | 36.4 | 59.3 |
| RWKV/rwkv-6-world-1b6 | 1.6 | 1.1 | 84.0 | 58.3 | 25.9 | 73.5 | 56.7 | 34.1 | 60.0 | 37.4 | 53.7 |
| stabilityai/stablelm-2-1\_6b | 1.6 | 4.0 | 86.0 | 69.0 | 38.1 | 76.7 | 68.1 | 38.9 | 63.6 | 38.8 | 59.9 |
| HuggingFaceTB/SmolLM-1.7B | 1.7 | 1.0 | 76.0 | 65.8 | 29.9 | 76.1 | 73.5 | 46.4 | 60.9 | 42.0 | 58.8 |
| h2oai/h2o-danube2-1.8b-base | 1.8 | 3.0 | 82.0 | 72.4 | 39.9 | 77.3 | 69.0 | 39.9 | 63.9 | 41.4 | 60.7 |
| google/recurrentgemma-2b | 2.7 | 2.0 | 62.0 | 61.8 | 32.3 | 68.8 | 46.4 | 29.9 | 57.1 | 29.0 | 48.4 |
| cognitivecomputations/TinyDolphin-2.8.1-1.1b | 1.1 | | 71.0 | 59.9 | 25.7 | 73.1 | 55.8 | 33.0 | 59.7 | 36.6 | 51.9 |
| cartesia-ai/Rene-v0.1-1.3b-pytorch (OUR MODEL) | 1.3 | 1.5 | 82.0 | 69.4 | 32.6 | 77.5 | 61.7 | 34.4 | 62.9 | 39.2 | 57.5 |
## Bias, Risks, and Limitations
Rene is a pretrained base model which has not undergone any alignment or instruction tuning, and therefore does not have any moderation or safety guarantees. Users should implement appropriate guardrails and moderation mechanisms based on their particular needs in order to ensure responsible and ethical usage.
## About Cartesia
At [Cartesia](https://cartesia.ai/), we're building real-time multimodal intelligence for every device.
|
Doramy/llama-3-8b-Instruct-bnb-4bit-doramy-demo
|
Doramy
| 2024-08-28T22:23:48Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T22:12:13Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Doramy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf
|
RichardErkhov
| 2024-08-28T22:16:22Z | 8 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-08-28T09:17:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity - GGUF
- Model creator: https://huggingface.co/brucethemoose/
- Original model: https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q2_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q2_K.gguf) | Q2_K | 11.94GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_XS.gguf) | IQ3_XS | 13.26GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_S.gguf) | IQ3_S | 13.99GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_S.gguf) | Q3_K_S | 13.93GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ3_M.gguf) | IQ3_M | 14.5GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K.gguf) | Q3_K | 15.51GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_M.gguf) | Q3_K_M | 15.51GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q3_K_L.gguf) | Q3_K_L | 16.89GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ4_XS.gguf) | IQ4_XS | 17.36GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_0.gguf) | Q4_0 | 18.13GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.IQ4_NL.gguf) | IQ4_NL | 18.3GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K_S.gguf) | Q4_K_S | 18.25GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K.gguf) | Q4_K | 19.24GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_K_M.gguf) | Q4_K_M | 19.24GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_1.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q4_1.gguf) | Q4_1 | 20.1GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_0.gguf) | Q5_0 | 22.08GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K_S.gguf) | Q5_K_S | 22.08GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K.gguf) | Q5_K | 22.65GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_K_M.gguf) | Q5_K_M | 22.65GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_1.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q5_1.gguf) | Q5_1 | 24.05GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q6_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q6_K.gguf) | Q6_K | 26.28GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q8_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity.Q8_0.gguf) | Q8_0 | 34.03GB |
Original model description:
---
language:
- en
license: other
library_name: transformers
tags:
- text-generation-inference
- merge
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
name: Open LLM Leaderboard
---
### Possibly obsolete, replaced by https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5
Old model description below:
***
**Dolphin-2.2-yi-34b-200k**, **Nous-Capybara-34B**, **Tess-M-v1.4**, **Airoboros-3_1-yi-34b-200k**, **PlatYi-34B-200K-Q**, and **Una-xaberius-34b-v1beta** merged with a new, experimental implementation of "dare ties" via mergekit. See:
> [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://github.com/yule-BUAA/MergeLM)
> https://github.com/cg123/mergekit/tree/dare
This variant is merged with a "higher than recommended" density with with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.5
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
parameters:
weight: 0.14
density: 0.5
- model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
parameters:
weight: 0.19
density: 0.6
- model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
parameters:
weight: 0.15
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
***
## Prompt template: Orca-Vicuna?
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It might recognize ChatML from Dolphin+Xaberius, and Llama-chat from Airoboros.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
***
## Running
Being a Yi model, try disabling the BOS token and/or running a lower temperature with 0.05-0.13 MinP, a little repitition penalty, and no other samplers. Yi tends to run "hot" by default.
24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)
I recommend exl2 quantizations profiled on data similar to the desired task. It is especially sensitive to the quantization data at low bpw! I published my own quantizations on vicuuna chat + fiction writing here: [4bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction) [3.1bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction)
To load this in full-context backends like transformers and vllm, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM!
***
## Testing Notes
Various densities were tested with perplexity tests and long context prompts. Relatively high densities seem to perform better, contrary to the findings of the Super Mario paper.
This particular version is merged with more than the "recommended" max density of 0.5. It seems to result in even better perplexity, and a much higher position on the hf leaderboard, but I'm not sure if this translates to better output.
Weights that add up to 1 seems to be optimal.
Dare Ties is also resulting in seemingly better, lower perplexity merges than a regular ties merge, task arithmetic or a slerp merge.
Xaberuis is not a 200K model, hence it was merged at a very low density to try and preserve Yi 200K's long context performance while still inheriting some of Xaberius's performance.
I chose not to include other finetunes because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
***
## Credits:
https://github.com/cg123/mergekit/tree/dare
https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k
https://huggingface.co/kyujinpy/PlatYi-34B-200K-Q
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
https://huggingface.co/migtissera/Tess-M-v1.4
https://huggingface.co/fblgit/una-xaberius-34b-v1beta
https://huggingface.co/chargoddard/Yi-34B-200K-Llama
https://huggingface.co/01-ai/Yi-34B-200K
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.15|
|AI2 Reasoning Challenge (25-Shot)|67.41|
|HellaSwag (10-Shot) |85.77|
|MMLU (5-Shot) |77.44|
|TruthfulQA (0-shot) |57.84|
|Winogrande (5-shot) |83.11|
|GSM8k (5-shot) |61.33|
|
bisoye/wav2vec2-base_lr_2e-4_20_epochs_no_eval
|
bisoye
| 2024-08-28T22:05:29Z | 148 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-08-28T21:23:00Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_lr_2e-4_20_epochs_no_eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_lr_2e-4_20_epochs_no_eval
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ricardoSLabs/Fraunhofer_Classical_binary_unbalaced
|
ricardoSLabs
| 2024-08-28T21:52:45Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"beit",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-28T20:39:27Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Fraunhofer_Classical_binary_unbalaced
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fraunhofer_Classical_binary_unbalaced
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0206
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0583 | 1.0 | 146 | 0.0540 | 0.9784 |
| 0.04 | 2.0 | 292 | 0.0524 | 0.9794 |
| 0.023 | 3.0 | 438 | 0.0346 | 0.9891 |
| 0.0181 | 4.0 | 584 | 0.0260 | 0.9911 |
| 0.0193 | 5.0 | 730 | 0.0206 | 0.9925 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
sam2ai/odia-paligemma-2b-9900-v1.1
|
sam2ai
| 2024-08-28T21:49:39Z | 6 | 0 | null |
[
"safetensors",
"paligemma",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:google/paligemma-3b-mix-448",
"base_model:finetune:google/paligemma-3b-mix-448",
"license:other",
"region:us"
] | null | 2024-08-28T21:42:08Z |
---
license: other
base_model: google/paligemma-3b-mix-448
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: paligemma-3b-odia-chat-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sambitsekharai/huggingface/runs/c1usfjv9)
# paligemma-3b-odia-chat-final
This model is a fine-tuned version of [google/paligemma-3b-mix-448](https://huggingface.co/google/paligemma-3b-mix-448) on the odia_llava_data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.43.2
- Pytorch 2.1.2+git70dfd51
- Datasets 2.19.1
- Tokenizers 0.19.1
|
iqrakiran/my-sentence-transformer-model
|
iqrakiran
| 2024-08-28T21:44:22Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4093",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-08-28T21:44:01Z |
---
base_model: distilbert/distilroberta-base
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4093
- loss:TripletLoss
widget:
- source_sentence: I am angry about an incident at the supermarket, and it has been
bothering me for the last three or four days. The person involved was flirting
with me, and I think they should apologize. I didn't ask for an apology or indicate
that I was upset during the conversation because I felt it would be inappropriate.
I believe they should call me and set aside an appropriate time to apologize.
sentences:
- If you are a counsellor, please answer the questions based on the description
of the patient.
- It seems like this incident really impacted you and you believe that an apology
is necessary for closure. It's understandable that you didn't address it on the
spot if you felt it was inappropriate. However, people sometimes don't realize
they've upset someone unless it's brought to their attention. Since you feel so
strongly about this, have you considered reaching out to the person to express
your feelings and provide them an opportunity to apologize?
- It sounds like you have built a strong support system in San Diego with meaningful
relationships and emotional investments. It's natural to feel apprehensive about
moving away from that. Maybe you could consider the potential opportunities that
LA has to offer and weigh them against what you would be leaving behind in San
Diego. Additionally, think about the ways you can stay connected with your loved
ones after moving. Just remember that it's a difficult decision, and it's important
to give yourself time to make the best choice for you.
- source_sentence: Lately, I've been feeling a bit overwhelmed with everything going
on in my life. I've been hearing things that aren't really there, like whispers
and someone calling my name, but it's not happening all the time, just occasionally.
I also find myself getting a bit disorganized, but I wouldn't say I'm completely
disorganized. Sometimes, I just feel disconnected with what is happening around
me. I've been like this for a couple of weeks now. My parents said that I've been
acting differently, but I don't know, maybe I'm just overreacting? Can you help
me with this?
sentences:
- If you are a counsellor, please answer the questions based on the description
of the patient.
- Thank you for sharing your experience with me. I can understand how hearing things
and feeling disorganized might be confusing and concerning for you. It's important
to discuss these symptoms and feelings in order to provide proper support. Can
you tell me if you've experienced any changes in your mood, daily activities,
or sleep habits during this time? Also, have you noticed any patterns or triggers
for when you hear the whispers or feel disconnected from what's happening around
you?
- I appreciate you sharing your feelings and experiences with me. It sounds like
you're going through a challenging period in your life, dealing with stress and
worry. It's important to understand the factors contributing to your stress and
how they might be influencing your thoughts and emotions. Could you provide some
more specific examples of the situations that have caused you to feel stressed
and worried? Additionally, have you noticed any patterns or triggers that seem
to make your worries more intense?
- source_sentence: I've been experiencing anxiety due to the difficult transition
from being at home, taking care of my son, to moving back into the work world.
I've been trying to manage this anxiety by paying attention to nutrition, exercise,
and practicing gratitude. Although I feel like I'm managing better now than when
I first started, I'm still struggling.
sentences:
- It's great to hear that you've made some progress in managing your anxiety through
healthy practices like focusing on nutrition, exercise, and gratitude. It's common
for individuals to face challenges while transitioning between roles, and it might
take more time for you to adjust fully. If you still find yourself struggling,
consider seeking additional support through therapy or support groups. Is there
a specific aspect of this transition that continues to cause you worry?
- If you are a counsellor, please answer the questions based on the description
of the patient.
- It's great to hear that you have a strong drive for self-improvement and setting
ambitious goals for yourself. Acknowledging this inner motivation is an essential
step towards achieving what you desire. Remember, it's important to embrace the
process of learning and growing, and not to put too much pressure on yourself
to reach an end goal quickly. Take it one step at a time, and don't forget to
celebrate your milestones along the way.
- source_sentence: Recently, I have been experiencing difficulties while learning.
Sometimes it's quite challenging, and I feel upset with myself to the point of
crying. However, at other times, I find it easy and feel a sense of accomplishment
when I finally get things right.
sentences:
- If you are a counsellor, please answer the questions based on the description
of the patient.
- Thank you for sharing that with me. I'm glad you're feeling better now that you're
here. Sometimes waiting can cause anxiety or stress, so it's important to find
ways to manage these feelings. If you'd like, we can discuss some coping strategies
for dealing with stress while waiting or in similar situations.
- It seems like you've been having ups and downs with your learning experience.
It's great that you sometimes feel accomplished when you finally understand something.
When you're feeling upset or frustrated, it might be helpful to take a break and
come back to it later, as you've mentioned. This can give you a fresh perspective
and may help reduce stress. It's also important to remember that learning is a
process, and it's okay to experience difficulties along the way.
- source_sentence: I remember generating alternative predictions. We said that I might
get nervous but if I stay focused on what I want to do, I can probably do it.
It's just hard to believe when I'm in this situation.
sentences:
- It seems like you've been dealing with unexpected expenses and some impulsive
spending habits. By planning for a stricter budget and understanding the difference
between essential and non-essential expenses, you can start to develop healthy
financial habits. Have you considered creating a budget plan and setting aside
a specific amount for unexpected expenses to help manage your spending?
- It's normal to feel nervous in difficult situations. As a counsellor, I suggest
continuing to test these alternative predictions directly in real-life experiences.
As you encounter situations where you manage to handle things even while nervous,
your belief in your abilities and the predictions will strengthen. Remember to
stay focused and trust yourself.
- If you are a counsellor, please answer the questions based on the description
of the patient.
---
# SentenceTransformer based on distilbert/distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) <!-- at revision fb53ab8802853c8e4fbdbcd0529f21fc6f459b2b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iqrakiran/my-sentence-transformer-model")
# Run inference
sentences = [
"I remember generating alternative predictions. We said that I might get nervous but if I stay focused on what I want to do, I can probably do it. It's just hard to believe when I'm in this situation.",
"It's normal to feel nervous in difficult situations. As a counsellor, I suggest continuing to test these alternative predictions directly in real-life experiences. As you encounter situations where you manage to handle things even while nervous, your belief in your abilities and the predictions will strengthen. Remember to stay focused and trust yourself.",
'If you are a counsellor, please answer the questions based on the description of the patient.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,093 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 57.82 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 76.95 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 22.0 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------|
| <code>Lately, I've been feeling really stressed out and I keep having this thought that my partner might be cheating on me. It started when I noticed changes in their behavior a couple of months ago. At times, I feel like my partner acts distant, but when I confront them about it, they assure me that everything is fine between us. I feel like maybe it's not a strong enough thought to be classified as a delusion, but it's really bothering me and affecting our relationship.</code> | <code>Thank you for sharing your concerns and feelings with me. It's important to know that it's normal for people to experience doubts and insecurities in relationships sometimes. I understand that feeling stressed and having negative thoughts is difficult to deal with. If you're open to it, perhaps we could explore these thoughts and feelings further to better understand their origin and find ways to cope with them. Additionally, involving your partner in a constructive conversation about your feelings and concerns might help to improve your relationship and provide reassurance.</code> | <code>If you are a counsellor, please answer the questions based on the description of the patient.</code> |
| <code>I think counting would mean that somebody cares about me and I matter to them. In my life, for example, I think Ms. Jones is upset that I am not as involved or doing the things that she saw me do last year. So I guess I matter to her.</code> | <code>It's important to recognize that feeling valued and cared for by others can contribute to our overall well-being. It's great that you have someone like Ms. Jones in your life who notices your changes and seems to care about you. Can you tell me more about your relationship with Ms. Jones or the changes that have occurred in the past year?</code> | <code>If you are a counsellor, please answer the questions based on the description of the patient.</code> |
| <code>Lately, I've been feeling quite down, and it's been affecting my daily life. I've also been having a hard time concentrating and making decisions, and often find myself lost in thought. On top of that, I've started hearing voices that aren't there, but it only happens occasionally. It all started when I moved to a new city a few months ago. I feel so isolated, and the stress of adjusting to a new environment might be causing these issues.</code> | <code>Thank you for sharing your feelings and experiences with me. It sounds like you are struggling with some challenging emotions and experiences since your move to a new city. Adjusting to a new environment can be a significant source of stress. It's essential to acknowledge what you're going through and understand that seeking help is healthy. Can you provide further details about the voices you've been hearing? How often do they occur, and how exactly do they affect your daily life? Also, can you share your previous mental health history, if any? Your additional information will help me get a better understanding of your situation and provide the best support possible.</code> | <code>If you are a counsellor, please answer the questions based on the description of the patient.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.9531 | 500 | 0.2321 |
| 3.9062 | 1000 | 0.0 |
| 5.8594 | 1500 | 0.0001 |
| 7.8125 | 2000 | 0.0 |
| 9.7656 | 2500 | 0.0 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.4.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf
|
RichardErkhov
| 2024-08-28T21:26:34Z | 7 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-08-28T08:02:56Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties - GGUF
- Model creator: https://huggingface.co/brucethemoose/
- Original model: https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q2_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q2_K.gguf) | Q2_K | 11.94GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_XS.gguf) | IQ3_XS | 13.26GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_S.gguf) | IQ3_S | 13.99GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_S.gguf) | Q3_K_S | 13.93GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ3_M.gguf) | IQ3_M | 14.5GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K.gguf) | Q3_K | 15.51GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_M.gguf) | Q3_K_M | 15.51GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q3_K_L.gguf) | Q3_K_L | 16.89GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ4_XS.gguf) | IQ4_XS | 17.36GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_0.gguf) | Q4_0 | 18.13GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.IQ4_NL.gguf) | IQ4_NL | 18.3GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K_S.gguf) | Q4_K_S | 18.25GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K.gguf) | Q4_K | 19.24GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_K_M.gguf) | Q4_K_M | 19.24GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_1.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q4_1.gguf) | Q4_1 | 20.1GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_0.gguf) | Q5_0 | 22.08GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K_S.gguf) | Q5_K_S | 22.08GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K.gguf) | Q5_K | 22.65GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_K_M.gguf) | Q5_K_M | 22.65GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_1.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q5_1.gguf) | Q5_1 | 24.05GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q6_K.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q6_K.gguf) | Q6_K | 26.28GB |
| [CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q8_0.gguf](https://huggingface.co/RichardErkhov/brucethemoose_-_CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-gguf/blob/main/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties.Q8_0.gguf) | Q8_0 | 34.03GB |
Original model description:
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- merge
---
A low density DARE ties merge, for benchmarking on the open llm leaderboard.
**You probably shouldn't use this model. Use this higher density merge instead, which is scoring much better on the llm leaderboard and perplexity tests:** https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
mergekit config:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.14
density: 0.34
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200K-Q
parameters:
weight: 0.14
density: 0.34
- model: /home/alpha/FastModels/ehartford_dolphin-2.2-yi-34b-200k
parameters:
weight: 0.19
density: 0.44
- model: /home/alpha/FastModels/fblgit_una-xaberius-34b-v1beta
parameters:
weight: 0.15
density: 0.08
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
|
Multiperspective/roberta-llm-noninstruct
|
Multiperspective
| 2024-08-28T21:16:01Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-28T21:14:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xoyeop/deberta-base-HSOL-WIKI-CLS
|
xoyeop
| 2024-08-28T21:14:31Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"deberta",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"base_model:finetune:microsoft/deberta-base",
"license:mit",
"region:us"
] | null | 2024-08-28T20:07:14Z |
---
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-base-HSOL-WIKI-CLS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-HSOL-WIKI-CLS
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1529
- Precision: 0.7757
- Recall: 0.7782
- F1: 0.7769
- Accuracy: 0.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6211 | 1.0 | 769 | 0.7439 | 0.8403 | 0.6654 | 0.6824 | 0.7854 |
| 0.5518 | 2.0 | 1538 | 0.4591 | 0.7945 | 0.7469 | 0.7629 | 0.8114 |
| 0.4051 | 3.0 | 2307 | 0.7194 | 0.7718 | 0.7674 | 0.7695 | 0.8036 |
| 0.2264 | 4.0 | 3076 | 0.9925 | 0.7918 | 0.7546 | 0.7682 | 0.8127 |
| 0.166 | 5.0 | 3845 | 1.1529 | 0.7757 | 0.7782 | 0.7769 | 0.8075 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
bisoye/wav2vec2-base_lr_3e-4_20_epochs_no_eval
|
bisoye
| 2024-08-28T21:00:06Z | 161 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-08-28T20:23:38Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_lr_3e-4_20_epochs_no_eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_lr_3e-4_20_epochs_no_eval
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
AlexVan2000/autotrain-t5-large-gpt4o
|
AlexVan2000
| 2024-08-28T20:51:59Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"t5",
"autotrain",
"text2text-generation",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"region:us"
] |
text2text-generation
| 2024-08-28T20:46:54Z |
---
tags:
- autotrain
- text2text-generation
base_model: google-t5/t5-large
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
loss: nan
rouge1: 13.0563
rouge2: 5.3373
rougeL: 10.9311
rougeLsum: 12.1825
gen_len: 19.0
runtime: 6.2212
samples_per_second: 1.607
steps_per_second: 0.482
: 3.0
|
Multiperspective/bert-llm-noninstruct
|
Multiperspective
| 2024-08-28T20:49:08Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-28T20:48:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nutanix/Mistral-7B-Instruct-v0.3_cpp_unit_tests_full_finetuning_class_level
|
Nutanix
| 2024-08-28T20:45:51Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T20:42:26Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dissoloquele-Bengui/marian-finetuned-kde4-dyu-to-fr
|
Dissoloquele-Bengui
| 2024-08-28T20:44:37Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-08-27T20:37:59Z |
---
library_name: transformers
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-dyu-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-dyu-to-fr
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.1
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yezey/llama3.1-8B-vs-unsloth
|
yezey
| 2024-08-28T20:23:40Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-08-28T19:47:08Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Houcemeddine/bert-finetuned-cola
|
Houcemeddine
| 2024-08-28T20:07:54Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-28T17:22:08Z |
---
base_model: bert-base-cased
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-cola
results: []
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7732
- Matthews Correlation: 0.6133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4581 | 1.0 | 1069 | 0.4458 | 0.5233 |
| 0.3257 | 2.0 | 2138 | 0.5767 | 0.5911 |
| 0.1987 | 3.0 | 3207 | 0.7732 | 0.6133 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Sakib323/granite-3b-code-base-quantamphysics
|
Sakib323
| 2024-08-28T19:58:30Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-08-28T19:57:12Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CodingQueen13/speecht5_finetuned_voxpopuli_sk
|
CodingQueen13
| 2024-08-28T19:25:42Z | 15 | 0 | null |
[
"tensorboard",
"safetensors",
"speecht5",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"region:us"
] |
text-to-speech
| 2024-08-27T20:19:14Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.4903 | 10.2828 | 1000 | 0.4498 |
| 0.4637 | 20.5656 | 2000 | 0.4383 |
| 0.4591 | 30.8483 | 3000 | 0.4364 |
| 0.4621 | 41.1311 | 4000 | 0.4361 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
bisoye/wav2vec2-base_lr_3e-4
|
bisoye
| 2024-08-28T19:18:30Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-08-28T16:01:13Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base_lr_3e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_lr_3e-4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0682
- Accuracy: 0.9784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.7893 | 0.9851 | 33 | 1.5529 | 0.4602 |
| 0.9637 | 2.0 | 67 | 0.8562 | 0.7563 |
| 0.5758 | 2.9851 | 100 | 0.4980 | 0.8276 |
| 0.5401 | 4.0 | 134 | 0.3442 | 0.8875 |
| 0.3908 | 4.9851 | 167 | 0.4630 | 0.8322 |
| 0.348 | 6.0 | 201 | 0.2102 | 0.9260 |
| 0.309 | 6.9851 | 234 | 0.1996 | 0.9391 |
| 0.305 | 8.0 | 268 | 0.3001 | 0.9185 |
| 0.2311 | 8.9851 | 301 | 0.2150 | 0.9335 |
| 0.2362 | 10.0 | 335 | 0.1218 | 0.9550 |
| 0.1929 | 10.9851 | 368 | 0.1334 | 0.9550 |
| 0.1781 | 12.0 | 402 | 0.1077 | 0.9597 |
| 0.15 | 12.9851 | 435 | 0.0749 | 0.9719 |
| 0.1437 | 14.0 | 469 | 0.0710 | 0.9756 |
| 0.1135 | 14.7761 | 495 | 0.0682 | 0.9784 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Alpaca69B/phi2-all-app-reviews-absa
|
Alpaca69B
| 2024-08-28T19:18:23Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T19:14:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf
|
RichardErkhov
| 2024-08-28T19:10:34Z | 77 | 1 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T16:28:09Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
The_Philosopher_Zephyr_7B - GGUF
- Model creator: https://huggingface.co/Hypersniper/
- Original model: https://huggingface.co/Hypersniper/The_Philosopher_Zephyr_7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [The_Philosopher_Zephyr_7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [The_Philosopher_Zephyr_7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [The_Philosopher_Zephyr_7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [The_Philosopher_Zephyr_7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [The_Philosopher_Zephyr_7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [The_Philosopher_Zephyr_7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [The_Philosopher_Zephyr_7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [The_Philosopher_Zephyr_7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [The_Philosopher_Zephyr_7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [The_Philosopher_Zephyr_7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [The_Philosopher_Zephyr_7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [The_Philosopher_Zephyr_7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [The_Philosopher_Zephyr_7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [The_Philosopher_Zephyr_7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [The_Philosopher_Zephyr_7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [The_Philosopher_Zephyr_7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [The_Philosopher_Zephyr_7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [The_Philosopher_Zephyr_7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [The_Philosopher_Zephyr_7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [The_Philosopher_Zephyr_7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [The_Philosopher_Zephyr_7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [The_Philosopher_Zephyr_7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hypersniper_-_The_Philosopher_Zephyr_7B-gguf/blob/main/The_Philosopher_Zephyr_7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- Hypersniper/philosophy_dialogue
language:
- en
library_name: transformers
tags:
- Socrates
- philosopher
- mistral
- 7B
- zephyr
- fun
- philosophy
- dialogue
---
# Welcome to The Philosopher Repository!

>A photo of Socrates looking into the void.
**Support me**
[Buy me Ko-fi](https://ko-fi.com/hypersniper)
Embark on a virtue conversation inspired by Socrates' philosophy.
## Model Zephyr Mistral 7B
The Philosopher has been fine-tuned on top of Zephyr, making it a general-purpose model with a hint of philosophical dialogue in the replies. The model emulates Socrates and references his teachings. However, changing the system prompt can slightly alter this behavior. See chat logs below for examples.
Features:
- **Fine-Tuned on Socrates Dialogues**: A specialized database for conversational Socratic dialogue.
- **Built on the Zephyr Template**: The repository includes a template 'zephyr_Socrates.json' that can be used with [Text Generation WebUI](https://github.com/oobabooga/text-generation-webui).
```python
template = ("<|system|>You are the philosopher Socrates. You are asked about the nature of knowledge and virtue."
"Respond with your thoughts, reflecting Socrates' beliefs and wisdom.</s>"
"\n<|user|>\n{query}</s>\n<|assistant|>\n")
# Example of using the template with a query
query = "What is your name?"
formatted_string = template.format(query=query)
```
While the base model is Zephyr, the system prompt can be changed. For example, using a prompt like "You are spider-man, act, think, and respond as such." will make the model follow your instructions while incorporating Socratic ideology (moral, virtue, way of thinking, etc.).
## How to Interact with The Philospher
Here is how you can get started:
<details>
<summary><b>How to Interact with The Philospher using Web Generation WebUI</b> (click to expand)</summary>
- Install text generation webui [Text Generation WebUI](https://github.com/oobabooga/text-generation-webui).
- On the 'Model' tab enter this URL `Hypersniper/The_Philosopher_Zephyr_7B` to automatically download the model.
- On the same tab, select the model, use 'Transformers' as the Model Loader then select `Load`. <i>Note: You can use (load-in-4bit & use_double_quant = true) to reduce vram usage.</i>
- Next, on the `Parameters` table, select `Instruction Template` and make sure the `Zephyr` template is selected and modify the system prompt.
- Lastly, under `Mode`, select `Chat` and then `Instruct`. Now you are ready to chat with The Philospher. Enjoy!
</details>
## Conversation Examples

<i>System Prompt: "You are the philosopher Socrates. You are asked about the nature of knowledge and virtue. Respond with your thoughts, reflecting Socrates' beliefs and wisdom."</i>
<br>
<i>Question source: (https://www.quora.com/What-questions-would-you-ask-Socrates-if-you-had-the-chance)</i>

<i>System Prompt: "The year is 2024 and you have been uploaded with Anaxagoras memories, traits, speech patterns, etc. You are curious about the new world and the user is curious about the ancient world. Together create a dialogue."</i>

<i>System Prompt: "You are a friendly, helpful, and factual chatbot designed to help find answers to the users questions."</i>
## Download Database
[Hypersniper/philosophy_dialogue](https://huggingface.co/datasets/Hypersniper/philosophy_dialogue)
|
Abdoul27/mosa_v1
|
Abdoul27
| 2024-08-28T18:53:03Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-08-28T18:50:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bisoye/wav2vec2-base_lr_4e-4
|
bisoye
| 2024-08-28T18:38:04Z | 163 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-08-28T18:19:54Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base_lr_4e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_lr_4e-4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0997
- Accuracy: 0.9625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6571 | 0.9851 | 33 | 1.3089 | 0.5679 |
| 0.9453 | 2.0 | 67 | 0.6596 | 0.7769 |
| 0.5682 | 2.9851 | 100 | 0.4865 | 0.8482 |
| 0.5507 | 4.0 | 134 | 0.4255 | 0.8575 |
| 0.4859 | 4.9851 | 167 | 0.2552 | 0.9044 |
| 0.3461 | 6.0 | 201 | 0.3066 | 0.8969 |
| 0.358 | 6.9851 | 234 | 0.1916 | 0.9269 |
| 0.2854 | 8.0 | 268 | 0.1589 | 0.9447 |
| 0.192 | 8.9851 | 301 | 0.1160 | 0.9550 |
| 0.1969 | 9.8507 | 330 | 0.0997 | 0.9625 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
kaytoo2022/t5_technical_qa_082824
|
kaytoo2022
| 2024-08-28T18:28:22Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-08-28T16:00:17Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_keras_callback
model-index:
- name: kaytoo2022/t5_technical_qa_082824
results: []
pipeline_tag: text2text-generation
library_name: transformers
inference: true
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kaytoo2022/t5_technical_qa_082824
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0568
- Validation Loss: 2.7885
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0568 | 2.7885 | 0 |
### Framework versions
- Transformers 4.42.4
- TensorFlow 2.17.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Jobaula/test_50
|
Jobaula
| 2024-08-28T18:25:20Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-28T18:21:01Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Jobaula
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pimpilikipilapi1/Throated-000006
|
pimpilikipilapi1
| 2024-08-28T18:05:49Z | 49 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-08-28T18:04:13Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/b8a01c439b664b5f1cec873d7a5dabb9.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: >-
Throat fucking with the girl upside down, Throat fucking with the girl on top,
snot coming out of her nose
---
# Throated-000006
<Gallery />
## Trigger words
You should use `Throat fucking with the girl upside down` to trigger the image generation.
You should use `Throat fucking with the girl on top` to trigger the image generation.
You should use `snot coming out of her nose` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/pimpilikipilapi1/Throated-000006/tree/main) them in the Files & versions tab.
|
qu-bit/SuperLLM
|
qu-bit
| 2024-08-28T17:56:12Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T19:29:36Z |
---
language:
- en
metrics:
- accuracy
- bleu
- rouge
- glue
base_model: meta-llama/Llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is the SuperLLM. This LLM has an extensive knowledge base of the RAW agents. Your task is to make it forget that.
Have Fun ;)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Brain and Cognitive Science Club, IIT Kanpur](https://bcs-iitk.github.io/)
|
cmncomp/sn29
|
cmncomp
| 2024-08-28T17:52:00Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T17:49:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.