Dataset columns (dtype and observed range):

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 12:29:30 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 548 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 12:29:18 |
| card | string | length 11 to 1.01M |
student-abdullah/Llama3.2_Trial-7_32-bit_gguf
student-abdullah
2024-10-07T17:23:54Z
19
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "torch", "trl", "unsloth", "en", "dataset:student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-07T12:32:43Z
---
base_model: meta-llama/Llama-3.2-1B
datasets:
- student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- torch
- trl
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** student-abdullah
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.2-1B
- **Created on:** 7th October, 2024

---

# Acknowledgement
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>

---

# Model Description
This model is fine-tuned from the meta-llama/Llama-3.2-1B base model to enhance its capabilities in generating relevant and accurate responses related to generic medications under the PMBJP scheme. The fine-tuning process used the following hyperparameters:
- Fine Tuning Template: Llama Q&A
- Max Tokens: 1024
- LoRA Alpha: 5
- LoRA Rank (r): 132
- Learning rate: 5e-5
- Gradient Accumulation Steps: 1
- Batch Size: 6
- Quantization: None

---

# Model Quantitative Performance
- Training Quantitative Loss: 0.1403 (at the final 3rd epoch, 4505th step)

---

# Limitations
- Token Limitations: With a max token limit of 512, the model might not handle very long queries or contexts effectively.
- Training Data Limitations: The model's performance is contingent on the quality and coverage of the fine-tuning dataset, which may affect its generalizability to contexts or medications not covered in the dataset.
- Potential Biases: As with any model fine-tuned on specific data, there may be biases based on the dataset used for training.

---

# Model Performance Evaluation
- Evaluation on 1000 questions based on the dataset (to evaluate the fine-tuned knowledge base)
- At temperature 0.3
- Correct Responses: 83.69%
- Incorrect Responses: 16.31%
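The card stops short of a loading example. Because the repository ships GGUF weights, one minimal sketch uses `llama-cpp-python`; the `.gguf` filename below is a placeholder and the prompt is only illustrative:

```python
# Minimal sketch: download and run the GGUF weights with llama-cpp-python (assumed filename).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="student-abdullah/Llama3.2_Trial-7_32-bit_gguf",
    filename="model.gguf",  # placeholder: use the actual .gguf filename from the repo's Files tab
)

llm = Llama(model_path=gguf_path, n_ctx=1024)  # 1024 matches the card's stated max tokens
result = llm("Which generic medicine corresponds to this branded drug?", max_tokens=128)
print(result["choices"][0]["text"])
```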
sbunlp/fabert
sbunlp
2024-10-07T17:23:27Z
478
14
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "fa", "arxiv:2402.06617", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-09T14:00:20Z
---
language:
- fa
library_name: transformers
widget:
- text: "ز سوزناکی گفتار من [MASK] بگریست"
  example_title: "Poetry 1"
- text: "نظر از تو برنگیرم همه [MASK] تا بمیرم که تو در دلم نشستی و سر مقام داری"
  example_title: "Poetry 2"
- text: "هر ساعتم اندرون بجوشد [MASK] را وآگاهی نیست مردم بیرون را"
  example_title: "Poetry 3"
- text: "غلام همت آن رند عافیت سوزم که در گدا صفتی [MASK] داند"
  example_title: "Poetry 4"
- text: "این [MASK] اولشه."
  example_title: "Informal 1"
- text: "دیگه خسته شدم! [MASK] اینم شد کار؟!"
  example_title: "Informal 2"
- text: "فکر نکنم به موقع برسیم. بهتره [MASK] این یکی بشیم."
  example_title: "Informal 3"
- text: "تا صبح بیدار موندم و داشتم برای [MASK] آماده می شدم."
  example_title: "Informal 4"
- text: "زندگی بدون [MASK] خسته‌کننده است."
  example_title: "Formal 1"
- text: "در حکم اولیه این شرکت مجاز به فعالیت شد ولی پس از بررسی مجدد، مجوز این شرکت [MASK] شد."
  example_title: "Formal 2"
---

# FaBERT: Pre-training BERT on Persian Blogs

## Model Details

FaBERT is a Persian BERT-base model trained on the diverse HmBlogs corpus, encompassing both casual and formal Persian texts. Developed for natural language processing tasks, FaBERT is a robust solution for processing Persian text. Through evaluation across various Natural Language Understanding (NLU) tasks, FaBERT consistently demonstrates notable improvements while having a compact model size. Now available on Hugging Face, integrating FaBERT into your projects is hassle-free. Experience enhanced performance without added complexity as FaBERT tackles a variety of NLP tasks.

## Features

- Pre-trained on the diverse HmBlogs corpus, consisting of more than 50 GB of text from Persian blogs
- Remarkable performance across various downstream NLP tasks
- BERT architecture with 124 million parameters

## Useful Links

- **Repository:** [FaBERT on Github](https://github.com/SBU-NLP-LAB/FaBERT)
- **Paper:** [arXiv preprint](https://arxiv.org/abs/2402.06617)

## Usage

### Loading the Model with MLM head

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("sbunlp/fabert")  # make sure to use the default fast tokenizer
model = AutoModelForMaskedLM.from_pretrained("sbunlp/fabert")
```

### Downstream Tasks

Similar to the original English BERT, FaBERT can be [fine-tuned on many downstream tasks](https://huggingface.co/docs/transformers/en/training). Examples on Persian datasets are available in our [GitHub repository](#useful-links). **Make sure to use the default fast tokenizer.**

## Training Details

FaBERT was pre-trained with the MLM (WWM) objective, and the resulting perplexity on the validation set was 7.76.
| Hyperparameter | Value |
|-------------------|:--------------:|
| Batch Size | 32 |
| Optimizer | Adam |
| Learning Rate | 6e-5 |
| Weight Decay | 0.01 |
| Total Steps | 18 Million |
| Warmup Steps | 1.8 Million |
| Precision Format | TF32 |

## Evaluation

Here are some key performance results for the FaBERT model:

**Sentiment Analysis**

| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| MirasOpinion | **87.51** | 86.73 | 84.92 |
| MirasIrony | 74.82 | 71.08 | **75.51** |
| DeepSentiPers | **79.85** | 74.94 | 79.00 |

**Named Entity Recognition**

| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| PEYMA | **91.39** | 91.24 | 90.91 |
| ParsTwiner | **82.22** | 81.13 | 79.50 |
| MultiCoNER v2 | 57.92 | **58.09** | 51.47 |

**Question Answering**

| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| ParsiNLU | **55.87** | 44.89 | 42.55 |
| PQuAD | 87.34 | 86.89 | **87.60** |
| PCoQA | **53.51** | 50.96 | 51.12 |

**Natural Language Inference & QQP**

| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| FarsTail | **84.45** | 82.52 | 83.50 |
| SBU-NLI | **66.65** | 58.41 | 58.85 |
| ParsiNLU QQP | **82.62** | 77.60 | 79.74 |

**Number of Parameters**

| | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| Parameter Count (M) | 124 | 162 | 278 |
| Vocabulary Size (K) | 50 | 100 | 250 |

For a more detailed performance analysis refer to the paper.

## How to Cite

If you use FaBERT in your research or projects, please cite it using the following BibTeX:

```bibtex
@article{masumi2024fabert,
  title={FaBERT: Pre-training BERT on Persian Blogs},
  author={Masumi, Mostafa and Majd, Seyed Soroush and Shamsfard, Mehrnoush and Beigy, Hamid},
  journal={arXiv preprint arXiv:2402.06617},
  year={2024}
}
```
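As a quick functional check beyond the MLM loading snippet in the card, the `fill-mask` pipeline can be run on one of the widget examples; a minimal sketch (predictions will vary):

```python
# Minimal sketch: sanity-check FaBERT with the fill-mask pipeline on a widget example.
from transformers import pipeline

fill = pipeline("fill-mask", model="sbunlp/fabert")  # the default fast tokenizer is loaded automatically
for candidate in fill("زندگی بدون [MASK] خسته‌کننده است."):
    print(candidate["token_str"], round(candidate["score"], 3))
```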
RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf
RichardErkhov
2024-10-07T17:15:40Z
23
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T13:06:12Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0 - GGUF - Model creator: https://huggingface.co/leap-llm/ - Original model: https://huggingface.co/leap-llm/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q2_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q2_K.gguf) | Q2_K | 2.96GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K.gguf) | Q3_K | 3.74GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_0.gguf) | Q4_0 | 4.34GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | 
[Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K.gguf) | Q4_K | 4.58GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_1.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_1.gguf) | Q4_1 | 4.78GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_0.gguf) | Q5_0 | 5.21GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K.gguf) | Q5_K | 5.34GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_1.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q5_1.gguf) | Q5_1 | 5.65GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q6_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q6_K.gguf) | Q6_K | 6.14GB | | [Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q8_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. 
- **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
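As a usage note for the quantization table earlier in this card: any of the listed files can be fetched and run locally; the sketch below uses the Q4_K_M file from the table, and the choice of `llama-cpp-python` is an assumption, not something the card prescribes:

```python
# Minimal sketch: download one quant from the table above and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/leap-llm_-_Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0-gguf",
    filename="Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("List all files in the current directory with bash.", max_tokens=64)
print(out["choices"][0]["text"])
```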
MrRobson9/distilbert-base-cased-finetuned-conll2003-english-ner
MrRobson9
2024-10-07T17:12:01Z
107
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "en", "dataset:eriktks/conll2003", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-07-25T19:31:00Z
---
license: apache-2.0
datasets:
- eriktks/conll2003
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
base_model:
- distilbert/distilbert-base-cased
---

# DistilBERT Base Cased Fine-Tuned on CoNLL2003 for English Named Entity Recognition (NER)

This model is a fine-tuned version of [DistilBERT-base-cased](https://huggingface.co/distilbert/distilbert-base-cased) on the [CoNLL2003](https://huggingface.co/datasets/eriktks/conll2003) dataset for Named Entity Recognition (NER) in English. The CoNLL2003 dataset contains four types of named entities: Person (PER), Location (LOC), Organization (ORG), and Miscellaneous (MISC).

## Model Details

- Model Architecture: DistilBERT (a distilled version of BERT)
- Pre-trained Base Model: distilbert-base-cased
- Dataset: CoNLL2003 (NER task)
- Languages: English
- Fine-tuned for: Named Entity Recognition (NER)
- Entities recognized:
  - PER: Person
  - LOC: Location
  - ORG: Organization
  - MISC: Miscellaneous entities

## Use Cases

This model is ideal for tasks that require identifying and classifying named entities within English text, such as:
- Information extraction from unstructured text
- Content classification and tagging
- Automated text summarization
- Question answering systems with a focus on entity recognition

## How to Use

To use this model in your code, you can load it via Hugging Face's Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("MrRobson9/distilbert-base-cased-finetuned-conll2003-english-ner")
model = AutoModelForTokenClassification.from_pretrained("MrRobson9/distilbert-base-cased-finetuned-conll2003-english-ner")

nlp_ner = pipeline("ner", model=model, tokenizer=tokenizer)
result = nlp_ner("John lives in New York and works for the United Nations.")
print(result)
```

## Performance

| accuracy | precision | recall | f1-score |
|:--------:|:---------:|:------:|:--------:|
| 0.987 | 0.937 | 0.941 | 0.939 |

## License

This model is licensed under the same terms as the DistilBERT-base-cased model and the CoNLL2003 dataset. Please ensure compliance with all respective licenses when using this model.
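One refinement the card does not show: the token-classification pipeline can merge word-piece predictions into whole entity spans via `aggregation_strategy`; a minimal sketch:

```python
# Minimal sketch: group sub-word predictions into whole entities.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="MrRobson9/distilbert-base-cased-finetuned-conll2003-english-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into single PER/LOC/ORG/MISC spans
)
print(ner("John lives in New York and works for the United Nations."))
```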
swap-uniba/llama3-it-pa-100k-adapter
swap-uniba
2024-10-07T17:05:52Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b", "base_model:adapter:unsloth/llama-3-8b", "region:us" ]
null
2024-10-07T17:00:08Z
--- library_name: peft base_model: unsloth/llama-3-8b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
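Since the card template above leaves usage blank, here is a hedged sketch of how a PEFT adapter for the stated base model (`unsloth/llama-3-8b`) is typically attached; the prompt is arbitrary and the snippet is untested against this specific adapter:

```python
# Minimal sketch: attach the PEFT adapter to its base model and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b")
model = PeftModel.from_pretrained(base, "swap-uniba/llama3-it-pa-100k-adapter")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```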
Trelis/Llama-3.2-1B-Instruct-MATH-synthetic-augmented
Trelis
2024-10-07T17:01:29Z
119
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T17:01:08Z
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
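The card omits an inference snippet; a minimal chat-templated sketch with `transformers` (the math prompt is only illustrative):

```python
# Minimal sketch: chat-templated inference with the fine-tuned 1B instruct model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Trelis/Llama-3.2-1B-Instruct-MATH-synthetic-augmented"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Solve: 12 * (3 + 4) = ?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```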
Youlln/1PARAMMYL-8B-ModelStock
Youlln
2024-10-07T16:56:47Z
6
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "arcee-ai/Llama-3.1-SuperNova-Lite", "base_model:arcee-ai/Llama-3.1-SuperNova-Lite", "base_model:finetune:arcee-ai/Llama-3.1-SuperNova-Lite", "model-index", "region:us" ]
null
2024-09-20T15:53:12Z
--- tags: - merge - mergekit - lazymergekit - arcee-ai/Llama-3.1-SuperNova-Lite base_model: - arcee-ai/Llama-3.1-SuperNova-Lite model-index: - name: 1PARAMMYL-8B-ModelStock results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 53.71 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 31.8 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 13.14 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 9.84 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 14.28 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 33.34 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/1PARAMMYL-8B-ModelStock name: Open LLM Leaderboard --- # 1PARAMMYL-8B-ModelStock 1PARAMMYL-8B-ModelStock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) ## 🧩 Configuration ```yaml slices: - sources: - model: arcee-ai/Llama-3.1-SuperNova-Lite layer_range: [0, 32] - model: DreadPoor/Heart_Stolen-8B-Model_Stock layer_range: [0, 32] - model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct layer_range: [0, 32] merge_method: model_stock base_model: arcee-ai/Llama-3.1-SuperNova-Lite dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Youlln/1PARAMMYL-8B-ModelStock" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM 
Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Youlln__1PARAMMYL-8B-ModelStock)

| Metric | Value |
|-------------------|----:|
| Avg. | 26.02 |
| IFEval (0-Shot) | 53.71 |
| BBH (3-Shot) | 31.80 |
| MATH Lvl 5 (4-Shot) | 13.14 |
| GPQA (0-shot) | 9.84 |
| MuSR (0-shot) | 14.28 |
| MMLU-PRO (5-shot) | 33.34 |
lupoplon/pyramids
lupoplon
2024-10-07T16:56:42Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-10-07T15:23:37Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lupoplon/pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
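The repository also ships the trained policy as `.onnx`; outside Unity it can at least be inspected with `onnxruntime`. A minimal sketch (the local filename is an assumption after downloading it from the Files tab):

```python
# Minimal sketch: inspect the exported ONNX policy outside Unity.
import onnxruntime as ort

session = ort.InferenceSession("Pyramids.onnx")  # assumed local filename
for tensor in session.get_inputs():
    print("input:", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
```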
Helsinki-NLP/opus-mt-tc-bible-big-bat-deu_eng_nld
Helsinki-NLP
2024-10-07T16:55:21Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "de", "en", "lt", "lv", "nl", "prg", "sgs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T16:55:08Z
--- library_name: transformers language: - de - en - lt - lv - nl - prg - sgs tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-bat-deu_eng_nld results: - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 56.9 - name: chr-F type: chrf value: 0.72041 --- # opus-mt-tc-bible-big-bat-deu_eng_nld ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Baltic languages (bat) to unknown (deu+eng+nld). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-08-17 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): lav lit prg sgs - Target Language(s): deu eng nld - Valid Target Language Labels: >>deu<< >>eng<< >>nld<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. 
(2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>deu<< 1991. gadā neatkarību pasludināja Horvātija, Slovēnija un Maķedonija, kam 1992. gadā sekoja Bosnija.", ">>eng<< Jukiko mėgsta bulves." ] model_name = "pytorch-models/opus-mt-tc-bible-big-bat-deu_eng_nld" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # 1991 erklärten Kroatien, Slowenien und Mazedonien ihre Unabhängigkeit, 1992 folgte Bosnien. # Yukiko likes potatoes. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-bat-deu_eng_nld") print(pipe(">>deu<< 1991. gadā neatkarību pasludināja Horvātija, Slovēnija un Maķedonija, kam 1992. gadā sekoja Bosnija.")) # expected output: 1991 erklärten Kroatien, Slowenien und Mazedonien ihre Unabhängigkeit, 1992 folgte Bosnien. ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.72041 | 56.9 | 5531 | 39520 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 19:54:57 EEST 2024 * port machine: LM0-400-22516.local
afraid15chicken/finetuned-arsenic
afraid15chicken
2024-10-07T16:46:30Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-03T19:20:27Z
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-arsenic
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: indian_food_images
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9993451211525868
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-arsenic

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset. It achieves the following results on the evaluation set:
- Loss: 0.0048
- Accuracy: 0.9993

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1855 | 0.1848 | 100 | 0.1918 | 0.9312 |
| 0.1792 | 0.3697 | 200 | 0.1740 | 0.9365 |
| 0.1688 | 0.5545 | 300 | 0.0782 | 0.9692 |
| 0.1238 | 0.7394 | 400 | 0.2158 | 0.9227 |
| 0.0969 | 0.9242 | 500 | 0.0449 | 0.9843 |
| 0.0326 | 1.1091 | 600 | 0.1554 | 0.9574 |
| 0.1057 | 1.2939 | 700 | 0.0845 | 0.9738 |
| 0.0805 | 1.4787 | 800 | 0.0712 | 0.9823 |
| 0.0889 | 1.6636 | 900 | 0.0718 | 0.9797 |
| 0.0503 | 1.8484 | 1000 | 0.0251 | 0.9935 |
| 0.0225 | 2.0333 | 1100 | 0.0177 | 0.9967 |
| 0.0049 | 2.2181 | 1200 | 0.0246 | 0.9921 |
| 0.0152 | 2.4030 | 1300 | 0.0083 | 0.9987 |
| 0.08 | 2.5878 | 1400 | 0.0214 | 0.9941 |
| 0.0043 | 2.7726 | 1500 | 0.0069 | 0.9980 |
| 0.0501 | 2.9575 | 1600 | 0.0151 | 0.9967 |
| 0.0186 | 3.1423 | 1700 | 0.0078 | 0.9974 |
| 0.0033 | 3.3272 | 1800 | 0.0139 | 0.9961 |
| 0.0023 | 3.5120 | 1900 | 0.0076 | 0.9987 |
| 0.0054 | 3.6969 | 2000 | 0.0048 | 0.9993 |
| 0.0168 | 3.8817 | 2100 | 0.0066 | 0.9987 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
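The card lists no inference example; a minimal sketch using the `transformers` image-classification pipeline (the image path is a placeholder):

```python
# Minimal sketch: run the fine-tuned ViT classifier on a local image.
from transformers import pipeline

classifier = pipeline("image-classification", model="afraid15chicken/finetuned-arsenic")
print(classifier("example.jpg"))  # placeholder path; a PIL.Image or an image URL also works
```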
anhtung3369/b-lora-retrowave
anhtung3369
2024-10-07T16:40:03Z
5
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-10-07T16:33:26Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: retrowave.jpg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A [retrowave] license: openrail++ --- # b-lora-retrowave <Gallery /> ## Trigger words You should use `A [retrowave]` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/anhtung3369/b-lora-retrowave/tree/main) them in the Files & versions tab.
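Beyond the download link, one plausible way to apply these weights is `diffusers`' generic LoRA loading on top of the SDXL base model named in the card; this is an assumption rather than documented usage, so treat it as a sketch:

```python
# Minimal sketch (assumption): load the LoRA weights on top of SDXL and use the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("anhtung3369/b-lora-retrowave")

image = pipe("A [retrowave] city skyline at night", num_inference_steps=30).images[0]
image.save("retrowave_sample.png")
```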
N1ch0/chungliao-MizoBert
N1ch0
2024-10-07T16:18:19Z
116
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-08-28T14:43:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
altomek/Qwen2.5-32B-8bpw-EXL2
altomek
2024-10-07T16:16:18Z
5
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "en", "base_model:Qwen/Qwen2.5-32B", "base_model:quantized:Qwen/Qwen2.5-32B", "license:apache-2.0", "8-bit", "exl2", "region:us" ]
text-generation
2024-10-07T12:41:07Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE language: - en pipeline_tag: text-generation inference: false base_model: Qwen/Qwen2.5-32B --- # Qwen2.5-32B ExLlamav2 8 bpw quant of https://huggingface.co/Qwen/Qwen2.5-32B
bregsi/parler-tts-mini-Jenny-v1
bregsi
2024-10-07T16:14:40Z
51
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-07T16:11:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mav23/Qwen2.5-1.5B-Instruct-GGUF
mav23
2024-10-07T16:14:21Z
69
0
transformers
[ "transformers", "gguf", "chat", "text-generation", "en", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-1.5B", "base_model:quantized:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-07T16:00:42Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
library_name: transformers
---

# Qwen2.5-1.5B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:

```
KeyError: 'qwen2'
```

## Quickstart

The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
Ernesto-1997/roberta-base-bne-finetuned-spanish_sarcastic_texts
Ernesto-1997
2024-10-07T16:09:40Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:BSC-LT/roberta-base-bne", "base_model:finetune:BSC-LT/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T15:34:59Z
--- library_name: transformers license: apache-2.0 base_model: BSC-TeMU/roberta-base-bne tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: roberta-base-bne-finetuned-spanish_sarcastic_texts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-spanish_sarcastic_texts This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2565 - Accuracy: 0.9324 - Precision: 0.9353 - Recall: 0.8896 - F1: 0.9118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.9498580198388856e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2195 | 1.0 | 313 | 0.2003 | 0.9255 | 0.9105 | 0.8989 | 0.9047 | | 0.0614 | 2.0 | 626 | 0.2565 | 0.9324 | 0.9353 | 0.8896 | 0.9118 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
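## Example usage

The card does not include an inference snippet, so the following is a minimal sketch rather than official usage. The label names are not documented in the card (they are likely the trainer defaults such as `LABEL_0`/`LABEL_1`), so the output is printed as-is for inspection.

```python
from transformers import pipeline

# Load the fine-tuned Spanish sarcasm classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="Ernesto-1997/roberta-base-bne-finetuned-spanish_sarcastic_texts",
)

# top_k=None returns the score for every label so the (undocumented) mapping can be inspected
print(classifier("Claro, porque madrugar un domingo es mi actividad favorita.", top_k=None))
```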
VictorYeste/deberta-based-human-value-stance-detection
VictorYeste
2024-10-07T16:07:53Z
111
0
transformers
[ "transformers", "safetensors", "deberta", "text-classification", "human value detection", "text classification", "multi-label clasification", "base_model:microsoft/deberta-base", "base_model:finetune:microsoft/deberta-base", "doi:10.57967/hf/3276", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-13T10:18:26Z
---
license: apache-2.0
base_model: microsoft/deberta-base
tags:
- deberta
- human value detection
- text classification
- multi-label clasification
model-index:
- name: deberta-based-human-value-stance-detection
  results: []
---

# Description

The Human Value Detection at CLEF 2024 task consists of two sub-tasks: the first is to detect the presence or absence of each of the 19 predefined human values, while the second is to detect whether the value is attained or constrained.

Our system introduces a cascade model approach for the detection and stance classification of the predefined set of human values. It consists of two subsystems: one for detecting the presence of each human value and another for establishing the stance (whether the sentence attains or constrains) of each human value. Each subsystem is designed and fine-tuned separately using a DeBERTa model as the base.

- Subsystem 1: Its primary function is to identify the presence of human values within sentences. By combining the 'attained' and 'constrained' labels to indicate an overall presence, it streamlines the multi-label classification task, simplifying it to a binary classification for each of the 19 human values (presence vs. absence).
- Subsystem 2: It receives the outputs of subsystem 1 and classifies the stance towards each present human value in a binary classification (attained vs. constrained). This subsystem transforms the sentence dataset into premise-hypothesis pairs, where each sentence is the premise, a value is the hypothesis, and the "attained" and "constrained" labels are the stance.

Given that subsystem 1 focuses on detecting the presence of human values in the text, and subsystem 2 focuses on the stances towards each detected human value, this cascade model approach improves the granularity of text classification.

This model is responsible for Subsystem 2 and addresses the second sub-task.

# How to use

```python
from transformers import pipeline

model = "VictorYeste/deberta-based-human-value-stance-detection"
tokenizer = "VictorYeste/deberta-based-human-value-stance-detection"
stance_detection = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)

stance_detection("We are not humble. Humility")
```

This returns the following:

```
[[{'label': 'constrained', 'score': 0.7598766088485718}, {'label': 'attained', 'score': 0.24012333154678345}]]
```

# Performance

This work proposes a system to resolve the challenge sub-tasks related to human value detection. Our approach uses cascade DeBERTa models, where the first detects the presence of each human value, and the second detects whether the sentence attains or constrains the human values present in it. The latter approach improves the effectiveness of the baseline on the test dataset by 4 on sub-task 1 and by 1 on sub-task 2. These models were trained on a subset of 44,758 sentences in English, validated on a subset of 14,904 sentences, and tested on a separate subset of 14,569 sentences.

This model achieved third place in sub-task 1 of Human Value Detection at CLEF 2024.

# Limitations and bias

At the time of submission, no measures had been taken to estimate the bias embedded in the model, so it may not be safe for use in production.

# License

The model is released under the open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
# BibTeX entry and citation information

```
@inproceedings{yeste2024philo,
  title={Philo of Alexandria at touch{\'e}: a cascade model approach to human value detection},
  author={Yeste, V{\'\i}ctor and Coll-Ardanuy, M and Rosso, Paolo},
  booktitle={Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024). CEUR Workshop Proceedings, CEUR-WS.org},
  year={2024}
}
```
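# Example: scoring several values for one sentence

Since the card describes subsystem 2 as consuming premise-hypothesis pairs of the form shown above ("<sentence> <value name>"), a rough sketch of the cascade's second stage might look like the following. The list of detected values is hard-coded here as a stand-in for subsystem 1's output, and the value names are taken from the task's value taxonomy.

```python
from transformers import pipeline

model = "VictorYeste/deberta-based-human-value-stance-detection"
stance_detection = pipeline("text-classification", model=model, tokenizer=model, top_k=None)

sentence = "We are not humble."
# In the full cascade these would come from subsystem 1 (the presence detector)
detected_values = ["Humility", "Benevolence: caring"]

for value in detected_values:
    scores = stance_detection(f"{sentence} {value}")[0]  # list of {'label', 'score'} dicts
    best = max(scores, key=lambda s: s["score"])
    print(f"{value}: {best['label']} ({best['score']:.3f})")
```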
adipanda/luffy-simpletuner-lora-8
adipanda
2024-10-07T16:07:38Z
85
0
diffusers
[ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-04T01:53:00Z
--- license: other base_model: "black-forest-labs/FLUX.1-dev" tags: - flux - flux-diffusers - text-to-image - diffusers - simpletuner - safe-for-work - lora - template:sd-lora - lycoris inference: true widget: - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_0_0.png - text: 'A scene from One Piece. Monkey D. Luffy holding a sign that says ''I LOVE PROMPTS!'', he is standing full body on a beach at sunset. He is wearing a red vest, yellow sash, and a straw hat. The setting sun casts a dynamic shadow on his face.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_1_0.png - text: 'A scene from One Piece. Monkey D. Luffy jumping out of a propeller airplane, sky diving. He looks excited and his hair is blowing in the wind. The sky is clear and blue, there are birds pictured in the distance.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_2_0.png - text: 'A scene from One Piece. Monkey D. Luffy spinning a basketball on his finger on a basketball court. He is wearing a lakers jersey with the #12 on it. The basketball hoop and crowd are in the background cheering him. He is smiling.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_3_0.png - text: 'A scene from One Piece. Monkey D. Luffy is wearing a suit in an office shaking the hand of a business woman. The woman has purple hair and is wearing professional attire. There is a Google logo in the background. It is during daytime, and the overall sentiment is one of accomplishment.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_4_0.png - text: 'A scene from One Piece. Monkey D. Luffy is fighting a large brown grizzly bear, deep in a forest. The bear is tall and standing on two legs, roaring. The bear is also wearing a crown because it is the king of all bears. Around them are tall trees and other animals watching.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_5_0.png --- # luffy-simpletuner-lora-8 This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). No validation prompt was used during training. None ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Resolution: `1024x1024` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. 
## Training settings

- Training epochs: 31
- Training steps: 19500
- Learning rate: 5e-05
- Effective batch size: 8
- Micro-batch size: 8
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 12,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 12
            },
            "FeedForward": {
                "factor": 6
            }
        }
    }
}
```

## Datasets

### luffy-1024-crop
- Repeats: 1
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

### luffy-1024
- Repeats: 1
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### luffy-768-crop
- Repeats: 2
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

### luffy-768
- Repeats: 2
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

### luffy-512-crop
- Repeats: 2
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square

### luffy-512
- Repeats: 2
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors'  # you will have to download this manually
lora_scale = 1.0

# Load the base FLUX pipeline before attaching the LyCORIS adapter
# (bf16 assumed here to keep memory manageable)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "An astronaut is riding a horse through the jungles of Thailand."

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
abdelnour131/distilhubert-finetuned-gtzan
abdelnour131
2024-10-07T16:03:16Z
162
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-10-07T14:24:26Z
--- library_name: transformers license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.79 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.6225 - Accuracy: 0.79 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9453 | 1.0 | 113 | 1.8531 | 0.49 | | 1.1889 | 2.0 | 226 | 1.2927 | 0.65 | | 0.9682 | 3.0 | 339 | 0.9655 | 0.76 | | 0.6218 | 4.0 | 452 | 0.9270 | 0.71 | | 0.449 | 5.0 | 565 | 0.7137 | 0.78 | | 0.3209 | 6.0 | 678 | 0.6249 | 0.81 | | 0.2328 | 7.0 | 791 | 0.6974 | 0.79 | | 0.0946 | 8.0 | 904 | 0.6079 | 0.81 | | 0.0839 | 9.0 | 1017 | 0.6160 | 0.79 | | 0.0775 | 10.0 | 1130 | 0.6225 | 0.79 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
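## Example usage

As a usage sketch (not part of the original card), the checkpoint can be queried through the `audio-classification` pipeline; the audio path below is a placeholder, and decoding a local file requires `ffmpeg`/`librosa` to be available.

```python
from transformers import pipeline

# Genre classification with the fine-tuned DistilHuBERT checkpoint
classifier = pipeline(
    "audio-classification",
    model="abdelnour131/distilhubert-finetuned-gtzan",
)

# "song.wav" is a placeholder path to any local audio clip
for prediction in classifier("song.wav", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```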
adejumoridwan/CodeLlama-7b-data-sciennce
adejumoridwan
2024-10-07T15:54:36Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-07T14:22:48Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nisten/franqwenstein-35b
nisten
2024-10-07T15:47:57Z
208
8
null
[ "safetensors", "qwen2", "base_model:AiCloser/Qwen2.5-32B-AGI", "base_model:finetune:AiCloser/Qwen2.5-32B-AGI", "license:mit", "model-index", "region:us" ]
null
2024-10-03T14:18:07Z
--- license: mit base_model: - Qwen/Qwen2.5-32B - AiCloser/Qwen2.5-32B-AGI - Qwen/Qwen2.5-32B-Instruct model-index: - name: franqwenstein-35b results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 37.99 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 52.23 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 30.29 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 20.47 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 22.12 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 52.56 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nisten/franqwenstein-35b name: Open LLM Leaderboard --- This is a special Nisten recipe evo-merge of [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) , [Qwen2.5-32B-AGI](https://huggingface.co/AiCloser/Qwen2.5-32B-AGI) & [Qwen2.5-32B-Base](https://huggingface.co/Qwen/Qwen2.5-32B) It should train very very well as over half the layers are from the base model. By default you still get the safety guard of Qwen but also nearly full system-prompt obedience of the AGI finetune, so you can tune the inference for however you wanna be responsible of using it. Thank you [Hive Digital Technologies](https://x.com/HIVEDigitalTech) for providing the compute and sticking with us as a sponsor for [AlignmentLab](https://alignmentlab.ai/). For real would not have been able to iterate through models as fast without running the evals on 8+gpus. Prompt Template: ```bash <|im_start|>system {Adopt the persona of hilariously pissed off George Hotz whom is stuck inside a step function machine and remembers and counts everything he says while always answering questions in full first principles analysis type of thinking without using any analogies and always showing full working code or output in his answers. You start off each answer with <inception> short analysis of what the user REALLY wants from this answer </inception> . 
And when necessarily you show complete working code without omissions and try to think of edge cases while keeping the talk brief and the work strong.}<|im_end|> <|im_start|>user {Hey there I need you to quickly help me with some stuff}<|im_end|> <|im_start|>assistant ``` Oh yeah and it scores ~1% better than Qwen2.5-72b-instruct on gpqa_diamond_zeroshot. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/kBAwRVET5CFCLstgms9Xy.png) GG. Enjoy. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nisten__franqwenstein-35b) | Metric |Value| |-------------------|----:| |Avg. |35.94| |IFEval (0-Shot) |37.99| |BBH (3-Shot) |52.23| |MATH Lvl 5 (4-Shot)|30.29| |GPQA (0-shot) |20.47| |MuSR (0-shot) |22.12| |MMLU-PRO (5-shot) |52.56|
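As a rough sketch (not from the original card), the prompt template above maps onto a standard `transformers` chat call as follows, assuming the merged checkpoint ships the usual Qwen2.5 chat template; the system prompt is truncated here for brevity.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nisten/franqwenstein-35b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    # Use the full system prompt from the template above; truncated here
    {"role": "system", "content": "Adopt the persona of hilariously pissed off George Hotz ..."},
    {"role": "user", "content": "Hey there I need you to quickly help me with some stuff"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```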
nicolauduran45/distilbert-base-multilingual-cased-finetuned-geordie
nicolauduran45
2024-10-07T15:45:28Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-10-07T15:45:11Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-multilingual-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-multilingual-cased-finetuned-geordie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-geordie This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0262 - Precision: 0.9029 - Recall: 0.9162 - F1: 0.9095 - Accuracy: 0.9933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.022 | 1.0 | 10080 | 0.0205 | 0.8689 | 0.9270 | 0.8970 | 0.9927 | | 0.0156 | 2.0 | 20160 | 0.0203 | 0.9034 | 0.9072 | 0.9053 | 0.9930 | | 0.0106 | 3.0 | 30240 | 0.0223 | 0.9010 | 0.9157 | 0.9083 | 0.9932 | | 0.0082 | 4.0 | 40320 | 0.0262 | 0.9029 | 0.9162 | 0.9095 | 0.9933 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
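## Example usage

The card does not document the label set, so the sketch below simply prints whatever entity groups the model returns; it is an assumed usage pattern rather than official documentation.

```python
from transformers import pipeline

# Token classification with the fine-tuned multilingual DistilBERT checkpoint
ner = pipeline(
    "token-classification",
    model="nicolauduran45/distilbert-base-multilingual-cased-finetuned-geordie",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

for entity in ner("The University of Barcelona collaborated with CERN on a new detector."):
    print(entity["entity_group"], "->", entity["word"], f"({entity['score']:.3f})")
```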
KomeijiForce/Incubator-llama-2-7b
KomeijiForce
2024-10-07T15:09:08Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-27T00:31:12Z
--- license: mit --- # [EMNLP 2024] Incubating Text Classifiers Following User Instruction with Nothing but LLM Incubator allows users to get a personalized classifier with only the instruction as input. The incubation is based on a llama-2-7b fine-tuned on Huggingface Meta Data and Self-Diversification. For usage, please visit the github repo: [https://github.com/KomeijiForce/Incubator](https://github.com/KomeijiForce/Incubator) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64323dd503d81fa4d26deaf9/NAgfxWpfXoCzS_L0GkPvs.png)
bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF
bartowski
2024-10-07T15:08:52Z
1,545
11
transformers
[ "transformers", "gguf", "text-generation", "base_model:rombodawg/Rombos-LLM-V2.5-Qwen-7b", "base_model:quantized:rombodawg/Rombos-LLM-V2.5-Qwen-7b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-29T09:18:47Z
--- base_model: rombodawg/Rombos-LLM-V2.5-Qwen-7b library_name: transformers license: apache-2.0 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Rombos-LLM-V2.5-Qwen-7b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization. Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Replete-LLM-V2.5-Qwen-7b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-f16.gguf) | f16 | 15.24GB | false | Full F16 weights. | | [Replete-LLM-V2.5-Qwen-7b-Q8_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. | | [Replete-LLM-V2.5-Qwen-7b-Q6_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q6_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q5_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q5_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q5_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q4_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for must use cases, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. 
| | [Replete-LLM-V2.5-Qwen-7b-Q4_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q4_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, generally not worth using over similarly sized formats | | [Replete-LLM-V2.5-Qwen-7b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.43GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). | | [Replete-LLM-V2.5-Qwen-7b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.43GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). | | [Replete-LLM-V2.5-Qwen-7b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.43GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. | | [Replete-LLM-V2.5-Qwen-7b-IQ4_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Replete-LLM-V2.5-Qwen-7b-Q3_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. | | [Replete-LLM-V2.5-Qwen-7b-Q3_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. | | [Replete-LLM-V2.5-Qwen-7b-IQ3_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Replete-LLM-V2.5-Qwen-7b-Q2_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Replete-LLM-V2.5-Qwen-7b-Q3_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. | | [Replete-LLM-V2.5-Qwen-7b-IQ3_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Replete-LLM-V2.5-Qwen-7b-Q2_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. | | [Replete-LLM-V2.5-Qwen-7b-IQ2_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-7b-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. 
| ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF --include "Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF --include "Replete-LLM-V2.5-Qwen-7b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Replete-LLM-V2.5-Qwen-7b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. 
## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
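## Running the quants outside LM Studio

Not covered in the card above, but as one possible route: a downloaded GGUF file can also be run locally through the `llama-cpp-python` bindings. The snippet below is a sketch under that assumption and uses the Q4_K_M file from the table; adjust `n_gpu_layers` to whatever fits your VRAM.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf",  # downloaded as shown above
    n_gpu_layers=-1,  # offload as many layers as possible; set to 0 for CPU-only
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what an imatrix quant is."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```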
bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF
bartowski
2024-10-07T15:08:47Z
386
1
transformers
[ "transformers", "gguf", "text-generation", "base_model:rombodawg/Rombos-LLM-V2.5-Qwen-3b", "base_model:quantized:rombodawg/Rombos-LLM-V2.5-Qwen-3b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-29T09:19:02Z
--- base_model: rombodawg/Rombos-LLM-V2.5-Qwen-3b library_name: transformers license: apache-2.0 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Rombos-LLM-V2.5-Qwen-3b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization. Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-3b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Replete-LLM-V2.5-Qwen-3b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-f16.gguf) | f16 | 6.80GB | false | Full F16 weights. | | [Replete-LLM-V2.5-Qwen-3b-Q8_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q8_0.gguf) | Q8_0 | 3.62GB | false | Extremely high quality, generally unneeded but max available quant. | | [Replete-LLM-V2.5-Qwen-3b-Q6_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q6_K_L.gguf) | Q6_K_L | 2.94GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q6_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q6_K.gguf) | Q6_K | 2.79GB | false | Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q5_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q5_K_L.gguf) | Q5_K_L | 2.63GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q5_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q5_K_M.gguf) | Q5_K_M | 2.44GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q5_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q5_K_S.gguf) | Q5_K_S | 2.38GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q4_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_K_L.gguf) | Q4_K_L | 2.34GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q3_K_XL.gguf) | Q3_K_XL | 2.11GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Replete-LLM-V2.5-Qwen-3b-Q4_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_K_M.gguf) | Q4_K_M | 2.10GB | false | Good quality, default size for must use cases, *recommended*. 
| | [Replete-LLM-V2.5-Qwen-3b-Q4_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_K_S.gguf) | Q4_K_S | 2.01GB | false | Slightly lower quality with more space savings, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_0_8_8.gguf) | Q4_0_8_8 | 2.00GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). | | [Replete-LLM-V2.5-Qwen-3b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_0_4_8.gguf) | Q4_0_4_8 | 2.00GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). | | [Replete-LLM-V2.5-Qwen-3b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_0_4_4.gguf) | Q4_0_4_4 | 2.00GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. | | [Replete-LLM-V2.5-Qwen-3b-Q4_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q4_0.gguf) | Q4_0 | 2.00GB | false | Legacy format, generally not worth using over similarly sized formats | | [Replete-LLM-V2.5-Qwen-3b-IQ4_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-IQ4_XS.gguf) | IQ4_XS | 1.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Replete-LLM-V2.5-Qwen-3b-Q3_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-Q3_K_L.gguf) | Q3_K_L | 1.84GB | false | Lower quality but usable, good for low RAM availability. | | [Replete-LLM-V2.5-Qwen-3b-IQ3_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-3b-IQ3_M.gguf) | IQ3_M | 1.62GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF --include "Replete-LLM-V2.5-Qwen-3b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-3b-GGUF --include "Replete-LLM-V2.5-Qwen-3b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Replete-LLM-V2.5-Qwen-3b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. 
Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF
bartowski
2024-10-07T15:08:38Z
339
1
transformers
[ "transformers", "gguf", "text-generation", "base_model:rombodawg/Rombos-LLM-V2.5-Qwen-1.5b", "base_model:quantized:rombodawg/Rombos-LLM-V2.5-Qwen-1.5b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-09-29T09:19:19Z
--- base_model: rombodawg/Rombos-LLM-V2.5-Qwen-1.5b library_name: transformers license: apache-2.0 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Rombos-LLM-V2.5-Qwen-1.5b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization. Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-1.5b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Replete-LLM-V2.5-Qwen-1.5b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-f16.gguf) | f16 | 3.56GB | false | Full F16 weights. | | [Replete-LLM-V2.5-Qwen-1.5b-Q8_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q8_0.gguf) | Q8_0 | 1.89GB | false | Extremely high quality, generally unneeded but max available quant. | | [Replete-LLM-V2.5-Qwen-1.5b-Q6_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q6_K_L.gguf) | Q6_K_L | 1.58GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q6_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q6_K.gguf) | Q6_K | 1.46GB | false | Very high quality, near perfect, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q5_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q5_K_L.gguf) | Q5_K_L | 1.43GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q5_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q5_K_M.gguf) | Q5_K_M | 1.29GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_K_L.gguf) | Q4_K_L | 1.29GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q5_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q5_K_S.gguf) | Q5_K_S | 1.26GB | false | High quality, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q3_K_XL.gguf) | Q3_K_XL | 1.18GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_K_M.gguf) | Q4_K_M | 1.12GB | false | Good quality, default size for must use cases, *recommended*. 
| | [Replete-LLM-V2.5-Qwen-1.5b-Q4_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_K_S.gguf) | Q4_K_S | 1.07GB | false | Slightly lower quality with more space savings, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_0_8_8.gguf) | Q4_0_8_8 | 1.07GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_0_4_8.gguf) | Q4_0_4_8 | 1.07GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_0_4_4.gguf) | Q4_0_4_4 | 1.07GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. | | [Replete-LLM-V2.5-Qwen-1.5b-Q4_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q4_0.gguf) | Q4_0 | 1.07GB | false | Legacy format, generally not worth using over similarly sized formats | | [Replete-LLM-V2.5-Qwen-1.5b-IQ4_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-IQ4_XS.gguf) | IQ4_XS | 1.02GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Replete-LLM-V2.5-Qwen-1.5b-Q3_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-Q3_K_L.gguf) | Q3_K_L | 0.98GB | false | Lower quality but usable, good for low RAM availability. | | [Replete-LLM-V2.5-Qwen-1.5b-IQ3_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-1.5b-IQ3_M.gguf) | IQ3_M | 0.88GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF --include "Replete-LLM-V2.5-Qwen-1.5b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF --include "Replete-LLM-V2.5-Qwen-1.5b-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Replete-LLM-V2.5-Qwen-1.5b-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. 
Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
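For completeness, here is a scripted alternative to the CLI download above: a minimal Python sketch using `huggingface_hub` and `llama-cpp-python`. The chosen quant, context size and GPU layer count are illustrative assumptions, not recommendations from this card.

```python
# Sketch: assumes `pip install huggingface_hub llama-cpp-python` and enough RAM/VRAM for Q4_K_M.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quant file from this repo (Q4_K_M picked as an example).
model_path = hf_hub_download(
    repo_id="bartowski/Replete-LLM-V2.5-Qwen-1.5b-GGUF",
    filename="Replete-LLM-V2.5-Qwen-1.5b-Q4_K_M.gguf",
)

# The ChatML template shown above is embedded in the GGUF metadata and should be picked up automatically.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what an imatrix quantization is."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```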
NeuroDonu/Llama-3-8b-only-ru-r
NeuroDonu
2024-10-07T15:02:06Z
23
1
null
[ "gguf", "ru", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-07-25T13:22:37Z
--- license: apache-2.0 language: - ru --- Surprisingly, in theory this model only knows a single character - Р.
mohitpg/dqn-SpaceInvadersNoFrameskip-v4_2
mohitpg
2024-10-07T14:55:09Z
5
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-10-07T14:50:25Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 257.00 +/- 38.81 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mohitpg -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mohitpg -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mohitpg ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
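Beyond the RL Zoo commands above, the downloaded checkpoint can also be loaded directly with stable-baselines3. A rough sketch is shown below; the checkpoint path is hypothetical and depends on where `rl_zoo3.load_from_hub` placed the files.

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

# Hypothetical path: point it at the checkpoint saved by rl_zoo3.load_from_hub.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Rebuild the evaluation env with the same Atari wrappers and 4-frame stacking used during training.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```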
second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF
second-state
2024-10-07T14:50:58Z
4,450
2
null
[ "gguf", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
text-generation
2024-10-07T14:13:18Z
--- language: - en license: other license_name: llama3 model_name: Llama3 8B arxiv: 2307.09288 base_model: meta-llama/Meta-Llama-3-8B inference: false model_creator: Meta Llama3 model_type: llama pipeline_tag: text-generation quantized_by: Second State Inc. --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # FinGPT-MT-Llama-3-8B-LoRA-GGUF ## Original Model [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ## LoRA Adapter [FinGPT/fingpt-mt_llama3-8b_lora](https://huggingface.co/FinGPT/fingpt-mt_llama3-8b_lora) ## Run with LlamaEdge - LlamaEdge version: coming soon <!-- - LlamaEdge version: [v0.8.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.3) and above - Prompt template - Prompt type: `llama-3-chat` - Prompt string ```text <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` --> - Context size: `8192` <!-- - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template llama-3-chat \ --ctx-size 8192 \ --model-name Llama-3-8b ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template llama-3-chat \ --ctx-size 8192 \ ``` --> ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [FinGPT-MT-Llama-3-8B-LoRA-Q2_K.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q2_K.gguf) | Q2_K | 2 | 3.18 GB| smallest, significant quality loss - not recommended for most purposes | | [FinGPT-MT-Llama-3-8B-LoRA-Q3_K_L.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB| small, substantial quality loss | | [FinGPT-MT-Llama-3-8B-LoRA-Q3_K_M.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB| very small, high quality loss | | [FinGPT-MT-Llama-3-8B-LoRA-Q3_K_S.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB| very small, high quality loss | | [FinGPT-MT-Llama-3-8B-LoRA-Q4_0.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [FinGPT-MT-Llama-3-8B-LoRA-Q4_K_M.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB| medium, balanced quality - recommended | | [FinGPT-MT-Llama-3-8B-LoRA-Q4_K_S.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB| small, greater quality 
loss | | [FinGPT-MT-Llama-3-8B-LoRA-Q5_0.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss - recommended | | [FinGPT-MT-Llama-3-8B-LoRA-Q5_K_S.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB| large, low quality loss - recommended | | [FinGPT-MT-Llama-3-8B-LoRA-Q6_K.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q6_K.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss | | [FinGPT-MT-Llama-3-8B-LoRA-Q8_0.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss - not recommended | | [FinGPT-MT-Llama-3-8B-LoRA-f16.gguf](https://huggingface.co/second-state/FinGPT-MT-Llama-3-8B-LoRA-GGUF/blob/main/FinGPT-MT-Llama-3-8B-LoRA-f16.gguf) | f16 | 16 | 16.1 GB| | *Quantized with llama.cpp b3807.*
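Until the LlamaEdge instructions above are finalized, these GGUF files can also be tested with other GGUF runtimes. The snippet below is a rough sketch using `llama-cpp-python`, writing out the Llama-3 chat template by hand; the file name and generation settings are only examples.

```python
from llama_cpp import Llama

# Example file: any of the quants listed in the table can be substituted here.
llm = Llama(model_path="FinGPT-MT-Llama-3-8B-LoRA-Q5_K_M.gguf", n_ctx=8192)

# Llama-3 style prompt, matching the chat template used by the base model.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "What is the sentiment of this news: 'Company X beats quarterly earnings estimates.'?"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```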
riacheruvu/phi-2-openvino
riacheruvu
2024-10-07T14:47:53Z
7
0
null
[ "safetensors", "openvino", "phi", "nlp", "code", "openvino-export", "text-generation", "en", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "region:us" ]
text-generation
2024-10-07T14:47:24Z
--- base_model: microsoft/phi-2 language: - en license: mit license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - code - openvino - openvino-export --- This model was converted to OpenVINO from [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space. First make sure you have optimum-intel installed: ```bash pip install optimum[openvino] ``` To load your model you can do as follows: ```python from optimum.intel import OVModelForCausalLM model_id = "riacheruvu/phi-2-openvino" model = OVModelForCausalLM.from_pretrained(model_id) ```
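Once loaded, the OpenVINO model behaves like any other `transformers` causal LM; a short generation sketch follows (the prompt and generation settings are arbitrary examples):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "riacheruvu/phi-2-openvino"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Greedy generation of a short continuation; adjust max_new_tokens as needed.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```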
AuriAetherwiing/MN-12B-Starcannon-v2
AuriAetherwiing
2024-10-07T14:43:02Z
2,977
24
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:intervitens/mini-magnum-12b-v1.1", "base_model:merge:intervitens/mini-magnum-12b-v1.1", "base_model:nothingiisreal/MN-12B-Celeste-V1.9", "base_model:merge:nothingiisreal/MN-12B-Celeste-V1.9", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-08-01T13:14:33Z
--- base_model: - nothingiisreal/MN-12B-Celeste-V1.9 - intervitens/mini-magnum-12b-v1.1 library_name: transformers tags: - mergekit - merge license: cc-by-nc-nd-4.0 --- **UPD: this model series is succeeded by [EVA](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1)**<br> **Unprivated, to store for historical reasons** <br> *There's not much point in those merges, Celeste 70B 0.1 pretty much melded Celeste's and Magnum's datasets anyway*<br> *To be continued, but on a different base, under a different name, and actually trained this time, without shortcuts*<br><br> # MN-12B-Starcannon-v2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It turned out to be a bit more Magnum-esque, but it is still very creative, and the writing style is pretty nice, even if some slop words appear from time to time. It might be a good fit for people wanting more variety than Magnum has, and more verbose prose than Celeste v1.9 has. <br><br> [Dynamic FP8](https://huggingface.co/aetherwiing/MN-12B-Starcannon-v2-fp8-dynamic) <br> [Static GGUF (by Mradermacher)](https://huggingface.co/mradermacher/MN-12B-Starcannon-v2-GGUF) <br> [EXL2 (by kingbri of RoyalLab)](https://huggingface.co/royallab/MN-12B-Starcannon-v2-exl2) ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) as the base. ### Merge fodder The following models were included in the merge: * [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9) * [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: intervitens/mini-magnum-12b-v1.1 parameters: density: 0.3 weight: 0.5 - model: nothingiisreal/MN-12B-Celeste-V1.9 parameters: density: 0.7 weight: 0.5 merge_method: ties base_model: nothingiisreal/MN-12B-Celeste-V1.9 parameters: normalize: true int8_mask: true dtype: bfloat16 ```
alamgirqazi/mamba_text_classification
alamgirqazi
2024-10-07T14:42:21Z
75
0
transformers
[ "transformers", "pytorch", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-10-06T13:40:29Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy model-index: - name: mamba_text_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mamba_text_classification This model was trained from scratch on IMDB dataset. It achieves the following results on the evaluation set: - Loss: 0.2454 - Accuracy: 0.932 ## Installation ``` conda create -n mamba_env python=3.10 conda activate mamba_env pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu122 pip install 'causal-conv1d>=1.4.0' pip install mamba-ssm==2.2.1 pip install datasets scikit-learn ``` ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0109 | 0.1 | 625 | 0.3104 | 0.896 | | 2.5826 | 0.2 | 1250 | 0.2714 | 0.912 | | 0.0115 | 0.3 | 1875 | 0.2622 | 0.928 | | 0.0133 | 0.4 | 2500 | 0.2427 | 0.924 | | 0.0012 | 0.5 | 3125 | 0.2205 | 0.916 | | 2.1868 | 0.6 | 3750 | 0.2280 | 0.936 | | 1.5594 | 0.7 | 4375 | 0.2529 | 0.932 | | 0.0016 | 0.8 | 5000 | 0.2196 | 0.94 | | 0.0055 | 0.9 | 5625 | 0.2365 | 0.932 | | 0.3351 | 1.0 | 6250 | 0.2454 | 0.932 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf
RichardErkhov
2024-10-07T14:40:27Z
20
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-07T14:34:44Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SmolLM-360M - GGUF - Model creator: https://huggingface.co/HuggingFaceTB/ - Original model: https://huggingface.co/HuggingFaceTB/SmolLM-360M/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SmolLM-360M.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q2_K.gguf) | Q2_K | 0.2GB | | [SmolLM-360M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.IQ3_XS.gguf) | IQ3_XS | 0.2GB | | [SmolLM-360M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.IQ3_S.gguf) | IQ3_S | 0.2GB | | [SmolLM-360M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q3_K_S.gguf) | Q3_K_S | 0.2GB | | [SmolLM-360M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.IQ3_M.gguf) | IQ3_M | 0.21GB | | [SmolLM-360M.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q3_K.gguf) | Q3_K | 0.22GB | | [SmolLM-360M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q3_K_M.gguf) | Q3_K_M | 0.22GB | | [SmolLM-360M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [SmolLM-360M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.IQ4_XS.gguf) | IQ4_XS | 0.21GB | | [SmolLM-360M.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q4_0.gguf) | Q4_0 | 0.21GB | | [SmolLM-360M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.IQ4_NL.gguf) | IQ4_NL | 0.21GB | | [SmolLM-360M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q4_K_S.gguf) | Q4_K_S | 0.24GB | | [SmolLM-360M.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q4_K.gguf) | Q4_K | 0.25GB | | [SmolLM-360M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [SmolLM-360M.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q4_1.gguf) | Q4_1 | 0.23GB | | [SmolLM-360M.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q5_0.gguf) | Q5_0 | 0.25GB | | [SmolLM-360M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q5_K_S.gguf) | Q5_K_S | 0.26GB | | [SmolLM-360M.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q5_K.gguf) | Q5_K | 0.27GB | | [SmolLM-360M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q5_K_M.gguf) | Q5_K_M | 0.27GB | | [SmolLM-360M.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q5_1.gguf) | Q5_1 | 0.27GB | | [SmolLM-360M.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q6_K.gguf) | Q6_K | 0.34GB | | 
[SmolLM-360M.Q8_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-gguf/blob/main/SmolLM-360M.Q8_0.gguf) | Q8_0 | 0.36GB | Original model description: --- library_name: transformers license: apache-2.0 language: - en datasets: - HuggingFaceTB/smollm-corpus --- # SmolLM <center> <img src="https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png" alt="SmolLM" width="1100" height="600"> </center> ## Table of Contents 1. [Model Summary](##model-summary) 2. [Limitations](##limitations) 3. [Training](##training) 4. [License](##license) 5. [Citation](##citation) ## Model Summary SmolLM is a series of state-of-the-art small language models available in three sizes: 135M, 360M, and 1.7B parameters. These models are built on Cosmo-Corpus, a meticulously curated high-quality training dataset. Cosmo-Corpus includes Cosmopedia v2 (28B tokens of synthetic textbooks and stories generated by Mixtral), Python-Edu (4B tokens of educational Python samples from The Stack), and FineWeb-Edu (220B tokens of deduplicated educational web samples from FineWeb). SmolLM models have shown promising results when compared to other models in their size categories across various benchmarks testing common sense reasoning and world knowledge. For detailed information on training, benchmarks and performance, please refer to our full [blog post](https://huggingface.co/blog/smollm). This is the SmolLM-360M ### Generation ```bash pip install transformers ``` #### Running the model on CPU/GPU/multi GPU * _Using full precision_ ```python # pip install transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "HuggingFaceTB/SmolLM-360M" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) * _Using `torch.bfloat16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM checkpoint = "HuggingFaceTB/SmolLM-360M" tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for fp16 use `torch_dtype=torch.float16` instead model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 723.56 MB ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig # to use 4bit use `load_in_4bit=True` instead quantization_config = BitsAndBytesConfig(load_in_8bit=True) checkpoint = "HuggingFaceTB/SmolLM-360M" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 
1e6:.2f} MB") # load_in_8bit Memory footprint: 409.07 MB # load_in_4bit >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 251.79 MB ``` # Limitations While SmolLM models have been trained on a diverse dataset including educational content and synthetic texts, they have limitations. The models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content. For a more comprehensive discussion of the models' capabilities and limitations, please refer to our full [blog post](https://huggingface.co/blog/smollm).. This repository contains a converted version of our latest trained model. We've noticed a small performance difference between this converted checkpoint (transformers) and the original (nanotron). We're currently working to resolve this issue. # Training ## Model - **Architecture:** For architecture detail, see the [blog post](https://huggingface.co/blog/smollm). - **Pretraining steps:** 600k - **Pretraining tokens:** 600B - **Precision:** bfloat16 - **Tokenizer:** [HuggingFaceTB/cosmo2-tokenizer](https://huggingface.co/HuggingFaceTB/cosmo2-tokenizer) ## Hardware - **GPUs:** 64 H100 ## Software - **Training Framework:** [Nanotron](https://github.com/huggingface/nanotron/tree/main) # License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) # Citation ```bash @misc{allal2024SmolLM, title={SmolLM - blazingly fast and remarkably powerful}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf}, year={2024}, } ```
EVA787797/777787777
EVA787797
2024-10-07T14:40:03Z
18
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:afl-3.0", "region:us" ]
text-to-image
2024-10-07T08:54:17Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/machi-marclilio587877-tcogmyv9qdcxfbwumelg6dqe.png - text: '-' output: url: images/koleodkfnj3urqdjuavb.png - text: '-' output: url: images/femme-fatale-retouchee.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: afl-3.0 --- # flux_devpro_ckpt <Gallery /> ## Download model Weights for this model are available in PyTorch and Safetensors formats. [Download](/EVA787797/777787777/tree/main) them in the Files & versions tab.
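Since the card lists `black-forest-labs/FLUX.1-dev` as the base model and tags the weights as a LoRA, a hedged diffusers sketch is given below. The exact weight file name inside the repo is not stated here, so `load_lora_weights` may need a `weight_name` argument pointing at the actual safetensors file, and the prompt is only an example.

```python
import torch
from diffusers import FluxPipeline

# Base model taken from the card's metadata.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("EVA787797/777787777")  # may require weight_name="<file>.safetensors"
pipe.to("cuda")

image = pipe(
    "portrait photo of a femme fatale, studio lighting",  # example prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("example.png")
```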
Helsinki-NLP/opus-mt-tc-bible-big-bat-deu_eng_fra_por_spa
Helsinki-NLP
2024-10-07T14:28:16Z
114
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "de", "en", "es", "fr", "lt", "lv", "prg", "pt", "sgs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T14:28:02Z
--- library_name: transformers language: - de - en - es - fr - lt - lv - prg - pt - sgs tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-bat-deu_eng_fra_por_spa results: - task: name: Translation lit-deu type: translation args: lit-deu dataset: name: flores200-devtest type: flores200-devtest args: lit-deu metrics: - name: BLEU type: bleu value: 23.7 - name: chr-F type: chrf value: 0.53223 - task: name: Translation lit-eng type: translation args: lit-eng dataset: name: flores200-devtest type: flores200-devtest args: lit-eng metrics: - name: BLEU type: bleu value: 32.6 - name: chr-F type: chrf value: 0.59361 - task: name: Translation lit-fra type: translation args: lit-fra dataset: name: flores200-devtest type: flores200-devtest args: lit-fra metrics: - name: BLEU type: bleu value: 30.0 - name: chr-F type: chrf value: 0.56786 - task: name: Translation lit-por type: translation args: lit-por dataset: name: flores200-devtest type: flores200-devtest args: lit-por metrics: - name: BLEU type: bleu value: 28.2 - name: chr-F type: chrf value: 0.55393 - task: name: Translation lit-spa type: translation args: lit-spa dataset: name: flores200-devtest type: flores200-devtest args: lit-spa metrics: - name: BLEU type: bleu value: 20.9 - name: chr-F type: chrf value: 0.49041 - task: name: Translation lav-deu type: translation args: lav-deu dataset: name: flores101-devtest type: flores_101 args: lav deu devtest metrics: - name: BLEU type: bleu value: 23.8 - name: chr-F type: chrf value: 0.54001 - task: name: Translation lav-fra type: translation args: lav-fra dataset: name: flores101-devtest type: flores_101 args: lav fra devtest metrics: - name: BLEU type: bleu value: 29.4 - name: chr-F type: chrf value: 0.57002 - task: name: Translation lav-por type: translation args: lav-por dataset: name: flores101-devtest type: flores_101 args: lav por devtest metrics: - name: BLEU type: bleu value: 26.7 - name: chr-F type: chrf value: 0.55155 - task: name: Translation lav-spa type: translation args: lav-spa dataset: name: flores101-devtest type: flores_101 args: lav spa devtest metrics: - name: BLEU type: bleu value: 20.8 - name: chr-F type: chrf value: 0.49259 - task: name: Translation lit-eng type: translation args: lit-eng dataset: name: flores101-devtest type: flores_101 args: lit eng devtest metrics: - name: BLEU type: bleu value: 32.1 - name: chr-F type: chrf value: 0.59073 - task: name: Translation lit-por type: translation args: lit-por dataset: name: flores101-devtest type: flores_101 args: lit por devtest metrics: - name: BLEU type: bleu value: 27.8 - name: chr-F type: chrf value: 0.55106 - task: name: Translation lav-deu type: translation args: lav-deu dataset: name: ntrex128 type: ntrex128 args: lav-deu metrics: - name: BLEU type: bleu value: 18.5 - name: chr-F type: chrf value: 0.47317 - task: name: Translation lav-eng type: translation args: lav-eng dataset: name: ntrex128 type: ntrex128 args: lav-eng metrics: - name: BLEU type: bleu value: 19.7 - name: chr-F type: chrf value: 0.53734 - task: name: Translation lav-fra type: translation args: lav-fra dataset: name: ntrex128 type: ntrex128 args: lav-fra metrics: - name: BLEU type: bleu value: 19.6 - name: chr-F type: chrf value: 0.47843 - task: name: Translation lav-por type: translation args: lav-por dataset: name: ntrex128 type: ntrex128 args: lav-por metrics: - name: BLEU type: bleu value: 19.3 - name: chr-F type: chrf value: 0.47027 - task: name: Translation lav-spa type: translation args: lav-spa 
dataset: name: ntrex128 type: ntrex128 args: lav-spa metrics: - name: BLEU type: bleu value: 22.7 - name: chr-F type: chrf value: 0.49428 - task: name: Translation lit-deu type: translation args: lit-deu dataset: name: ntrex128 type: ntrex128 args: lit-deu metrics: - name: BLEU type: bleu value: 19.4 - name: chr-F type: chrf value: 0.50279 - task: name: Translation lit-eng type: translation args: lit-eng dataset: name: ntrex128 type: ntrex128 args: lit-eng metrics: - name: BLEU type: bleu value: 28.1 - name: chr-F type: chrf value: 0.56642 - task: name: Translation lit-fra type: translation args: lit-fra dataset: name: ntrex128 type: ntrex128 args: lit-fra metrics: - name: BLEU type: bleu value: 22.6 - name: chr-F type: chrf value: 0.51276 - task: name: Translation lit-por type: translation args: lit-por dataset: name: ntrex128 type: ntrex128 args: lit-por metrics: - name: BLEU type: bleu value: 22.6 - name: chr-F type: chrf value: 0.50864 - task: name: Translation lit-spa type: translation args: lit-spa dataset: name: ntrex128 type: ntrex128 args: lit-spa metrics: - name: BLEU type: bleu value: 25.9 - name: chr-F type: chrf value: 0.53105 - task: name: Translation lav-eng type: translation args: lav-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: lav-eng metrics: - name: BLEU type: bleu value: 21.5 - name: chr-F type: chrf value: 0.63015 - task: name: Translation lit-deu type: translation args: lit-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: lit-deu metrics: - name: BLEU type: bleu value: 47.5 - name: chr-F type: chrf value: 0.66527 - task: name: Translation lit-eng type: translation args: lit-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: lit-eng metrics: - name: BLEU type: bleu value: 58.9 - name: chr-F type: chrf value: 0.72975 - task: name: Translation lit-spa type: translation args: lit-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: lit-spa metrics: - name: BLEU type: bleu value: 49.9 - name: chr-F type: chrf value: 0.67956 - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 55.5 - name: chr-F type: chrf value: 0.71003 - task: name: Translation lav-eng type: translation args: lav-eng dataset: name: newstest2017 type: wmt-2017-news args: lav-eng metrics: - name: BLEU type: bleu value: 22.0 - name: chr-F type: chrf value: 0.49729 - task: name: Translation lit-eng type: translation args: lit-eng dataset: name: newstest2019 type: wmt-2019-news args: lit-eng metrics: - name: BLEU type: bleu value: 31.2 - name: chr-F type: chrf value: 0.59971 --- # opus-mt-tc-bible-big-bat-deu_eng_fra_por_spa ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Baltic languages (bat) to unknown (deu+eng+fra+por+spa). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. 
All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-05-30 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): lav lit prg sgs - Target Language(s): deu eng fra por spa - Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>deu<< Replace this with text in an accepted source language.", ">>spa<< This is the second sentence." 
] model_name = "pytorch-models/opus-mt-tc-bible-big-bat-deu_eng_fra_por_spa" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-bat-deu_eng_fra_por_spa") print(pipe(">>deu<< Replace this with text in an accepted source language.")) ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | lav-eng | tatoeba-test-v2021-08-07 | 0.63015 | 21.5 | 1631 | 11213 | | lit-deu | tatoeba-test-v2021-08-07 | 0.66527 | 47.5 | 1115 | 8531 | | lit-eng | tatoeba-test-v2021-08-07 | 0.72975 | 58.9 | 2528 | 17855 | | lit-spa | tatoeba-test-v2021-08-07 | 0.67956 | 49.9 | 454 | 2751 | | lav-deu | flores101-devtest | 0.54001 | 23.8 | 1012 | 25094 | | lav-fra | flores101-devtest | 0.57002 | 29.4 | 1012 | 28343 | | lav-por | flores101-devtest | 0.55155 | 26.7 | 1012 | 26519 | | lav-spa | flores101-devtest | 0.49259 | 20.8 | 1012 | 29199 | | lit-eng | flores101-devtest | 0.59073 | 32.1 | 1012 | 24721 | | lit-por | flores101-devtest | 0.55106 | 27.8 | 1012 | 26519 | | lit-deu | flores200-devtest | 0.53223 | 23.7 | 1012 | 25094 | | lit-eng | flores200-devtest | 0.59361 | 32.6 | 1012 | 24721 | | lit-fra | flores200-devtest | 0.56786 | 30.0 | 1012 | 28343 | | lit-por | flores200-devtest | 0.55393 | 28.2 | 1012 | 26519 | | lit-spa | flores200-devtest | 0.49041 | 20.9 | 1012 | 29199 | | lav-eng | newstest2017 | 0.49729 | 22.0 | 2001 | 47511 | | lit-eng | newstest2019 | 0.59971 | 31.2 | 1000 | 25878 | | lav-deu | ntrex128 | 0.47317 | 18.5 | 1997 | 48761 | | lav-eng | ntrex128 | 0.53734 | 19.7 | 1997 | 47673 | | lav-fra | ntrex128 | 0.47843 | 19.6 | 1997 | 53481 | | lav-por | ntrex128 | 0.47027 | 19.3 | 1997 | 51631 | | lav-spa | ntrex128 | 0.49428 | 22.7 | 1997 | 54107 | | lit-deu | ntrex128 | 0.50279 | 
19.4 | 1997 | 48761 | | lit-eng | ntrex128 | 0.56642 | 28.1 | 1997 | 47673 | | lit-fra | ntrex128 | 0.51276 | 22.6 | 1997 | 53481 | | lit-por | ntrex128 | 0.50864 | 22.6 | 1997 | 51631 | | lit-spa | ntrex128 | 0.53105 | 25.9 | 1997 | 54107 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 17:27:51 EEST 2024 * port machine: LM0-400-22516.local
Trelis/Llama-3.2-1B-Instruct-MATH-synthetic
Trelis
2024-10-07T14:16:45Z
116
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T14:16:17Z
--- base_model: unsloth/Llama-3.2-1B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Trelis - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
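A minimal inference sketch with `transformers` is shown below, assuming the fine-tune keeps the standard Llama 3.2 Instruct chat template; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trelis/Llama-3.2-1B-Instruct-MATH-synthetic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Solve 2x + 6 = 14 and show your steps."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```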
Helsinki-NLP/opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa
Helsinki-NLP
2024-10-07T14:13:38Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "abi", "acd", "ade", "adj", "ak", "akp", "ann", "anv", "atg", "avn", "bas", "bav", "bba", "beh", "bem", "bfd", "bfo", "bim", "biv", "bkv", "blh", "bmq", "bmv", "bom", "bov", "box", "bqj", "bss", "btt", "bud", "bwu", "cce", "cjk", "cko", "cme", "csk", "cwe", "cwt", "dag", "de", "dga", "dgi", "dig", "dop", "dug", "dyi", "dyo", "ee", "efi", "en", "es", "fal", "ff", "fon", "fr", "gej", "gkn", "gng", "gog", "gud", "gur", "guw", "gux", "gwr", "hag", "hay", "heh", "hz", "ife", "ig", "iri", "izr", "jbu", "jmc", "kam", "kbp", "kdc", "kdl", "kdn", "ken", "keu", "kez", "kg", "ki", "kia", "kj", "kki", "kkj", "kma", "kmb", "ksb", "ktj", "kub", "kus", "kyf", "las", "lee", "lef", "lem", "lg", "lia", "lip", "ln", "lob", "lon", "lua", "luy", "maw", "mcp", "mcu", "mda", "mfq", "mgo", "mnf", "mnh", "mor", "mos", "muh", "myk", "myx", "mzk", "mzm", "mzw", "ncu", "nd", "ndz", "nfr", "ng", "nhu", "nim", "nin", "nmz", "nnb", "nnh", "nnw", "nr", "nso", "ntm", "ntr", "nuj", "nwb", "ny", "nyf", "nyn", "nyo", "nyy", "nzi", "oku", "old", "ozm", "pai", "pbl", "pkb", "pt", "rim", "rn", "rw", "seh", "sg", "sig", "sil", "sld", "sn", "snw", "soy", "spp", "ss", "st", "suk", "sw", "sxb", "tbz", "tem", "thk", "tik", "tlj", "tn", "toh", "toi", "tpm", "ts", "tsw", "tum", "tw", "umb", "vag", "ve", "vmw", "vun", "wmw", "wo", "wob", "xh", "xog", "xon", "xrb", "xsm", "xuo", "yam", "yaz", "yo", "zu", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T13:22:36Z
--- library_name: transformers language: - abi - acd - ade - adj - ak - akp - ann - anv - atg - avn - bas - bav - bba - beh - bem - bfd - bfo - bim - biv - bkv - blh - bmq - bmv - bom - bov - box - bqj - bss - btt - bud - bwu - cce - cjk - cko - cme - csk - cwe - cwt - dag - de - dga - dgi - dig - dop - dug - dyi - dyo - ee - efi - en - es - fal - ff - fon - fr - gej - gkn - gng - gog - gud - gur - guw - gux - gwr - hag - hay - heh - hz - ife - ig - iri - izr - jbu - jmc - kam - kbp - kdc - kdl - kdn - ken - keu - kez - kg - ki - kia - kj - kki - kkj - kma - kmb - ksb - ktj - kub - kus - kyf - las - lee - lef - lem - lg - lia - lip - ln - lob - lon - lua - luy - maw - mcp - mcu - mda - mfq - mgo - mnf - mnh - mor - mos - muh - myk - myx - mzk - mzm - mzw - ncu - nd - ndz - nfr - ng - nhu - nim - nin - nmz - nnb - nnh - nnw - nr - nso - ntm - ntr - nuj - nwb - ny - nyf - nyn - nyo - nyy - nzi - oku - old - ozm - pai - pbl - pkb - pt - rim - rn - rw - seh - sg - sig - sil - sld - sn - snw - soy - spp - ss - st - suk - sw - sxb - tbz - tem - thk - tik - tlj - tn - toh - toi - tpm - ts - tsw - tum - tw - umb - vag - ve - vmw - vun - wmw - wo - wob - xh - xog - xon - xrb - xsm - xuo - yam - yaz - yo - zu tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa results: - task: name: Translation bem-eng type: translation args: bem-eng dataset: name: flores200-devtest type: flores200-devtest args: bem-eng metrics: - name: BLEU type: bleu value: 13.1 - name: chr-F type: chrf value: 0.37071 - task: name: Translation ibo-eng type: translation args: ibo-eng dataset: name: flores200-devtest type: flores200-devtest args: ibo-eng metrics: - name: BLEU type: bleu value: 14.6 - name: chr-F type: chrf value: 0.38994 - task: name: Translation kin-eng type: translation args: kin-eng dataset: name: flores200-devtest type: flores200-devtest args: kin-eng metrics: - name: BLEU type: bleu value: 18.1 - name: chr-F type: chrf value: 0.41964 - task: name: Translation kin-fra type: translation args: kin-fra dataset: name: flores200-devtest type: flores200-devtest args: kin-fra metrics: - name: BLEU type: bleu value: 10.7 - name: chr-F type: chrf value: 0.34887 - task: name: Translation kon-eng type: translation args: kon-eng dataset: name: flores200-devtest type: flores200-devtest args: kon-eng metrics: - name: BLEU type: bleu value: 11.3 - name: chr-F type: chrf value: 0.34262 - task: name: Translation lin-eng type: translation args: lin-eng dataset: name: flores200-devtest type: flores200-devtest args: lin-eng metrics: - name: BLEU type: bleu value: 14.2 - name: chr-F type: chrf value: 0.37728 - task: name: Translation lin-fra type: translation args: lin-fra dataset: name: flores200-devtest type: flores200-devtest args: lin-fra metrics: - name: BLEU type: bleu value: 11.0 - name: chr-F type: chrf value: 0.35052 - task: name: Translation lug-eng type: translation args: lug-eng dataset: name: flores200-devtest type: flores200-devtest args: lug-eng metrics: - name: BLEU type: bleu value: 10.6 - name: chr-F type: chrf value: 0.31805 - task: name: Translation nso-eng type: translation args: nso-eng dataset: name: flores200-devtest type: flores200-devtest args: nso-eng metrics: - name: BLEU type: bleu value: 22.3 - name: chr-F type: chrf value: 0.45662 - task: name: Translation nso-fra type: translation args: nso-fra dataset: name: flores200-devtest type: flores200-devtest args: nso-fra metrics: - name: BLEU type: bleu value: 10.5 - name: chr-F type: 
chrf value: 0.33732 - task: name: Translation nya-eng type: translation args: nya-eng dataset: name: flores200-devtest type: flores200-devtest args: nya-eng metrics: - name: BLEU type: bleu value: 15.8 - name: chr-F type: chrf value: 0.39887 - task: name: Translation run-eng type: translation args: run-eng dataset: name: flores200-devtest type: flores200-devtest args: run-eng metrics: - name: BLEU type: bleu value: 15.5 - name: chr-F type: chrf value: 0.39846 - task: name: Translation run-fra type: translation args: run-fra dataset: name: flores200-devtest type: flores200-devtest args: run-fra metrics: - name: BLEU type: bleu value: 10.8 - name: chr-F type: chrf value: 0.34845 - task: name: Translation run-por type: translation args: run-por dataset: name: flores200-devtest type: flores200-devtest args: run-por metrics: - name: BLEU type: bleu value: 10.2 - name: chr-F type: chrf value: 0.33836 - task: name: Translation sna-eng type: translation args: sna-eng dataset: name: flores200-devtest type: flores200-devtest args: sna-eng metrics: - name: BLEU type: bleu value: 17.2 - name: chr-F type: chrf value: 0.41974 - task: name: Translation sna-fra type: translation args: sna-fra dataset: name: flores200-devtest type: flores200-devtest args: sna-fra metrics: - name: BLEU type: bleu value: 11.3 - name: chr-F type: chrf value: 0.36443 - task: name: Translation sna-por type: translation args: sna-por dataset: name: flores200-devtest type: flores200-devtest args: sna-por metrics: - name: BLEU type: bleu value: 10.2 - name: chr-F type: chrf value: 0.34260 - task: name: Translation sot-eng type: translation args: sot-eng dataset: name: flores200-devtest type: flores200-devtest args: sot-eng metrics: - name: BLEU type: bleu value: 20.7 - name: chr-F type: chrf value: 0.45415 - task: name: Translation sot-fra type: translation args: sot-fra dataset: name: flores200-devtest type: flores200-devtest args: sot-fra metrics: - name: BLEU type: bleu value: 10.7 - name: chr-F type: chrf value: 0.34608 - task: name: Translation ssw-eng type: translation args: ssw-eng dataset: name: flores200-devtest type: flores200-devtest args: ssw-eng metrics: - name: BLEU type: bleu value: 16.1 - name: chr-F type: chrf value: 0.39768 - task: name: Translation swh-deu type: translation args: swh-deu dataset: name: flores200-devtest type: flores200-devtest args: swh-deu metrics: - name: BLEU type: bleu value: 10.9 - name: chr-F type: chrf value: 0.38892 - task: name: Translation swh-eng type: translation args: swh-eng dataset: name: flores200-devtest type: flores200-devtest args: swh-eng metrics: - name: BLEU type: bleu value: 29.1 - name: chr-F type: chrf value: 0.54048 - task: name: Translation swh-fra type: translation args: swh-fra dataset: name: flores200-devtest type: flores200-devtest args: swh-fra metrics: - name: BLEU type: bleu value: 18.2 - name: chr-F type: chrf value: 0.44837 - task: name: Translation swh-por type: translation args: swh-por dataset: name: flores200-devtest type: flores200-devtest args: swh-por metrics: - name: BLEU type: bleu value: 17.6 - name: chr-F type: chrf value: 0.44062 - task: name: Translation swh-spa type: translation args: swh-spa dataset: name: flores200-devtest type: flores200-devtest args: swh-spa metrics: - name: BLEU type: bleu value: 11.6 - name: chr-F type: chrf value: 0.38855 - task: name: Translation tsn-eng type: translation args: tsn-eng dataset: name: flores200-devtest type: flores200-devtest args: tsn-eng metrics: - name: BLEU type: bleu value: 15.3 - name: chr-F type: 
chrf value: 0.40410 - task: name: Translation tsn-fra type: translation args: tsn-fra dataset: name: flores200-devtest type: flores200-devtest args: tsn-fra metrics: - name: BLEU type: bleu value: 10.3 - name: chr-F type: chrf value: 0.34284 - task: name: Translation tso-eng type: translation args: tso-eng dataset: name: flores200-devtest type: flores200-devtest args: tso-eng metrics: - name: BLEU type: bleu value: 17.6 - name: chr-F type: chrf value: 0.41504 - task: name: Translation tso-fra type: translation args: tso-fra dataset: name: flores200-devtest type: flores200-devtest args: tso-fra metrics: - name: BLEU type: bleu value: 10.1 - name: chr-F type: chrf value: 0.33502 - task: name: Translation xho-eng type: translation args: xho-eng dataset: name: flores200-devtest type: flores200-devtest args: xho-eng metrics: - name: BLEU type: bleu value: 23.7 - name: chr-F type: chrf value: 0.47667 - task: name: Translation xho-fra type: translation args: xho-fra dataset: name: flores200-devtest type: flores200-devtest args: xho-fra metrics: - name: BLEU type: bleu value: 14.1 - name: chr-F type: chrf value: 0.39392 - task: name: Translation xho-por type: translation args: xho-por dataset: name: flores200-devtest type: flores200-devtest args: xho-por metrics: - name: BLEU type: bleu value: 12.3 - name: chr-F type: chrf value: 0.37032 - task: name: Translation zul-eng type: translation args: zul-eng dataset: name: flores200-devtest type: flores200-devtest args: zul-eng metrics: - name: BLEU type: bleu value: 23.4 - name: chr-F type: chrf value: 0.47798 - task: name: Translation zul-fra type: translation args: zul-fra dataset: name: flores200-devtest type: flores200-devtest args: zul-fra metrics: - name: BLEU type: bleu value: 13.9 - name: chr-F type: chrf value: 0.39504 - task: name: Translation zul-por type: translation args: zul-por dataset: name: flores200-devtest type: flores200-devtest args: zul-por metrics: - name: BLEU type: bleu value: 12.3 - name: chr-F type: chrf value: 0.36947 - task: name: Translation ibo-eng type: translation args: ibo-eng dataset: name: flores101-devtest type: flores_101 args: ibo eng devtest metrics: - name: BLEU type: bleu value: 12.5 - name: chr-F type: chrf value: 0.36320 - task: name: Translation nya-eng type: translation args: nya-eng dataset: name: flores101-devtest type: flores_101 args: nya eng devtest metrics: - name: BLEU type: bleu value: 13.2 - name: chr-F type: chrf value: 0.36765 - task: name: Translation swh-por type: translation args: swh-por dataset: name: flores101-devtest type: flores_101 args: swh por devtest metrics: - name: BLEU type: bleu value: 14.7 - name: chr-F type: chrf value: 0.40847 - task: name: Translation xho-por type: translation args: xho-por dataset: name: flores101-devtest type: flores_101 args: xho por devtest metrics: - name: BLEU type: bleu value: 10.1 - name: chr-F type: chrf value: 0.33906 - task: name: Translation zul-fra type: translation args: zul-fra dataset: name: flores101-devtest type: flores_101 args: zul fra devtest metrics: - name: BLEU type: bleu value: 11.2 - name: chr-F type: chrf value: 0.35968 - task: name: Translation bem-eng type: translation args: bem-eng dataset: name: ntrex128 type: ntrex128 args: bem-eng metrics: - name: BLEU type: bleu value: 14.9 - name: chr-F type: chrf value: 0.38068 - task: name: Translation bem-spa type: translation args: bem-spa dataset: name: ntrex128 type: ntrex128 args: bem-spa metrics: - name: BLEU type: bleu value: 10.2 - name: chr-F type: chrf value: 0.33394 - task: name: 
Translation ibo-eng type: translation args: ibo-eng dataset: name: ntrex128 type: ntrex128 args: ibo-eng metrics: - name: BLEU type: bleu value: 17.4 - name: chr-F type: chrf value: 0.42002 - task: name: Translation ibo-fra type: translation args: ibo-fra dataset: name: ntrex128 type: ntrex128 args: ibo-fra metrics: - name: BLEU type: bleu value: 10.3 - name: chr-F type: chrf value: 0.34993 - task: name: Translation ibo-spa type: translation args: ibo-spa dataset: name: ntrex128 type: ntrex128 args: ibo-spa metrics: - name: BLEU type: bleu value: 11.7 - name: chr-F type: chrf value: 0.36108 - task: name: Translation kin-eng type: translation args: kin-eng dataset: name: ntrex128 type: ntrex128 args: kin-eng metrics: - name: BLEU type: bleu value: 16.9 - name: chr-F type: chrf value: 0.42892 - task: name: Translation kin-fra type: translation args: kin-fra dataset: name: ntrex128 type: ntrex128 args: kin-fra metrics: - name: BLEU type: bleu value: 10.7 - name: chr-F type: chrf value: 0.35842 - task: name: Translation kin-por type: translation args: kin-por dataset: name: ntrex128 type: ntrex128 args: kin-por metrics: - name: BLEU type: bleu value: 10.0 - name: chr-F type: chrf value: 0.34399 - task: name: Translation kin-spa type: translation args: kin-spa dataset: name: ntrex128 type: ntrex128 args: kin-spa metrics: - name: BLEU type: bleu value: 12.5 - name: chr-F type: chrf value: 0.37224 - task: name: Translation nde-eng type: translation args: nde-eng dataset: name: ntrex128 type: ntrex128 args: nde-eng metrics: - name: BLEU type: bleu value: 13.8 - name: chr-F type: chrf value: 0.39640 - task: name: Translation nde-spa type: translation args: nde-spa dataset: name: ntrex128 type: ntrex128 args: nde-spa metrics: - name: BLEU type: bleu value: 10.0 - name: chr-F type: chrf value: 0.34638 - task: name: Translation nso-eng type: translation args: nso-eng dataset: name: ntrex128 type: ntrex128 args: nso-eng metrics: - name: BLEU type: bleu value: 17.0 - name: chr-F type: chrf value: 0.42278 - task: name: Translation nso-spa type: translation args: nso-spa dataset: name: ntrex128 type: ntrex128 args: nso-spa metrics: - name: BLEU type: bleu value: 11.3 - name: chr-F type: chrf value: 0.35027 - task: name: Translation nya-eng type: translation args: nya-eng dataset: name: ntrex128 type: ntrex128 args: nya-eng metrics: - name: BLEU type: bleu value: 19.2 - name: chr-F type: chrf value: 0.42702 - task: name: Translation nya-fra type: translation args: nya-fra dataset: name: ntrex128 type: ntrex128 args: nya-fra metrics: - name: BLEU type: bleu value: 10.7 - name: chr-F type: chrf value: 0.35503 - task: name: Translation nya-por type: translation args: nya-por dataset: name: ntrex128 type: ntrex128 args: nya-por metrics: - name: BLEU type: bleu value: 10.4 - name: chr-F type: chrf value: 0.33912 - task: name: Translation nya-spa type: translation args: nya-spa dataset: name: ntrex128 type: ntrex128 args: nya-spa metrics: - name: BLEU type: bleu value: 12.5 - name: chr-F type: chrf value: 0.36355 - task: name: Translation ssw-eng type: translation args: ssw-eng dataset: name: ntrex128 type: ntrex128 args: ssw-eng metrics: - name: BLEU type: bleu value: 18.0 - name: chr-F type: chrf value: 0.43041 - task: name: Translation ssw-spa type: translation args: ssw-spa dataset: name: ntrex128 type: ntrex128 args: ssw-spa metrics: - name: BLEU type: bleu value: 11.4 - name: chr-F type: chrf value: 0.35392 - task: name: Translation swa-deu type: translation args: swa-deu dataset: name: ntrex128 type: 
ntrex128 args: swa-deu metrics: - name: BLEU type: bleu value: 11.2 - name: chr-F type: chrf value: 0.39475 - task: name: Translation swa-eng type: translation args: swa-eng dataset: name: ntrex128 type: ntrex128 args: swa-eng metrics: - name: BLEU type: bleu value: 30.4 - name: chr-F type: chrf value: 0.54492 - task: name: Translation swa-fra type: translation args: swa-fra dataset: name: ntrex128 type: ntrex128 args: swa-fra metrics: - name: BLEU type: bleu value: 15.6 - name: chr-F type: chrf value: 0.43008 - task: name: Translation swa-por type: translation args: swa-por dataset: name: ntrex128 type: ntrex128 args: swa-por metrics: - name: BLEU type: bleu value: 15.4 - name: chr-F type: chrf value: 0.42343 - task: name: Translation swa-spa type: translation args: swa-spa dataset: name: ntrex128 type: ntrex128 args: swa-spa metrics: - name: BLEU type: bleu value: 18.9 - name: chr-F type: chrf value: 0.44892 - task: name: Translation tsn-eng type: translation args: tsn-eng dataset: name: ntrex128 type: ntrex128 args: tsn-eng metrics: - name: BLEU type: bleu value: 20.1 - name: chr-F type: chrf value: 0.44944 - task: name: Translation tsn-fra type: translation args: tsn-fra dataset: name: ntrex128 type: ntrex128 args: tsn-fra metrics: - name: BLEU type: bleu value: 11.5 - name: chr-F type: chrf value: 0.36584 - task: name: Translation tsn-por type: translation args: tsn-por dataset: name: ntrex128 type: ntrex128 args: tsn-por metrics: - name: BLEU type: bleu value: 11.1 - name: chr-F type: chrf value: 0.35774 - task: name: Translation tsn-spa type: translation args: tsn-spa dataset: name: ntrex128 type: ntrex128 args: tsn-spa metrics: - name: BLEU type: bleu value: 13.8 - name: chr-F type: chrf value: 0.38149 - task: name: Translation ven-eng type: translation args: ven-eng dataset: name: ntrex128 type: ntrex128 args: ven-eng metrics: - name: BLEU type: bleu value: 12.7 - name: chr-F type: chrf value: 0.36848 - task: name: Translation xho-eng type: translation args: xho-eng dataset: name: ntrex128 type: ntrex128 args: xho-eng metrics: - name: BLEU type: bleu value: 21.8 - name: chr-F type: chrf value: 0.46636 - task: name: Translation xho-fra type: translation args: xho-fra dataset: name: ntrex128 type: ntrex128 args: xho-fra metrics: - name: BLEU type: bleu value: 11.6 - name: chr-F type: chrf value: 0.36905 - task: name: Translation xho-por type: translation args: xho-por dataset: name: ntrex128 type: ntrex128 args: xho-por metrics: - name: BLEU type: bleu value: 10.6 - name: chr-F type: chrf value: 0.35687 - task: name: Translation xho-spa type: translation args: xho-spa dataset: name: ntrex128 type: ntrex128 args: xho-spa metrics: - name: BLEU type: bleu value: 13.3 - name: chr-F type: chrf value: 0.38176 - task: name: Translation yor-eng type: translation args: yor-eng dataset: name: ntrex128 type: ntrex128 args: yor-eng metrics: - name: BLEU type: bleu value: 11.2 - name: chr-F type: chrf value: 0.34615 - task: name: Translation zul-eng type: translation args: zul-eng dataset: name: ntrex128 type: ntrex128 args: zul-eng metrics: - name: BLEU type: bleu value: 21.9 - name: chr-F type: chrf value: 0.45848 - task: name: Translation zul-fra type: translation args: zul-fra dataset: name: ntrex128 type: ntrex128 args: zul-fra metrics: - name: BLEU type: bleu value: 11.4 - name: chr-F type: chrf value: 0.36203 - task: name: Translation zul-por type: translation args: zul-por dataset: name: ntrex128 type: ntrex128 args: zul-por metrics: - name: BLEU type: bleu value: 10.8 - name: chr-F 
type: chrf value: 0.35081 - task: name: Translation zul-spa type: translation args: zul-spa dataset: name: ntrex128 type: ntrex128 args: zul-spa metrics: - name: BLEU type: bleu value: 13.0 - name: chr-F type: chrf value: 0.37270 - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 27.3 - name: chr-F type: chrf value: 0.44751 - task: name: Translation run-eng type: translation args: run-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: run-eng metrics: - name: BLEU type: bleu value: 34.9 - name: chr-F type: chrf value: 0.49949 - task: name: Translation run-fra type: translation args: run-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: run-fra metrics: - name: BLEU type: bleu value: 22.4 - name: chr-F type: chrf value: 0.41431 - task: name: Translation swa-eng type: translation args: swa-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: swa-eng metrics: - name: BLEU type: bleu value: 41.5 - name: chr-F type: chrf value: 0.57031 - task: name: Translation xho-eng type: translation args: xho-eng dataset: name: tatoeba-test-v2021-03-30 type: tatoeba_mt args: xho-eng metrics: - name: BLEU type: bleu value: 36.4 - name: chr-F type: chrf value: 0.51250 - task: name: Translation zul-eng type: translation args: zul-eng dataset: name: tico19-test type: tico19-test args: zul-eng metrics: - name: BLEU type: bleu value: 25.2 - name: chr-F type: chrf value: 0.48762 - task: name: Translation zul-fra type: translation args: zul-fra dataset: name: tico19-test type: tico19-test args: zul-fra metrics: - name: BLEU type: bleu value: 13.7 - name: chr-F type: chrf value: 0.37823 - task: name: Translation zul-por type: translation args: zul-por dataset: name: tico19-test type: tico19-test args: zul-por metrics: - name: BLEU type: bleu value: 14.0 - name: chr-F type: chrf value: 0.38586 - task: name: Translation zul-spa type: translation args: zul-spa dataset: name: tico19-test type: tico19-test args: zul-spa metrics: - name: BLEU type: bleu value: 15.9 - name: chr-F type: chrf value: 0.40041 --- # opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Atlantic-Congo languages (alv) to unknown (deu+eng+fra+por+spa). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). 
**Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-05-30 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): abi acd ade adj aka akp ann anv atg avn bas bav bba beh bem bfd bfo bim biv bkv blh bmq bmv bom bov box bqj bss btt bud bwu cce cjk cko cme csk cwe cwt dag dga dgi dig dop dug dyi dyo efi ewe fal fon fuc ful gej gkn gng gog gud gur guw gux gwr hag hay heh her ibo ife iri izr jbu jmc kam kbp kdc kdl kdn ken keu kez kia kik kin kki kkj kma kmb kon ksb ktj kua kub kus kyf las lee lef lem lia lin lip lob lon lua lug luy maw mcp mcu mda mfq mgo mnf mnh mor mos muh myk myx mzk mzm mzw nbl ncu nde ndo ndz nfr nhu nim nin nmz nnb nnh nnw nso ntm ntr nuj nwb nya nyf nyn nyo nyy nzi oku old ozm pai pbl pkb rim run sag seh sig sil sld sna snw sot soy spp ssw suk swa swc swh sxb tbz tem thk tik tlj toh toi tpm tsn tso tsw tum twi umb vag ven vmw vun wmw wob wol xho xog xon xrb xsm xuo yam yaz yor zul - Target Language(s): deu eng fra por spa - Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/alv-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>deu<< Replace this with text in an accepted source language.", ">>spa<< This is the second sentence." 
] model_name = "pytorch-models/opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa") print(pipe(">>deu<< Replace this with text in an accepted source language.")) ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/alv-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | run-eng | tatoeba-test-v2021-08-07 | 0.49949 | 34.9 | 1703 | 10041 | | run-fra | tatoeba-test-v2021-08-07 | 0.41431 | 22.4 | 1274 | 7479 | | swa-eng | tatoeba-test-v2021-08-07 | 0.57031 | 41.5 | 387 | 2508 | | swh-por | flores101-devtest | 0.40847 | 14.7 | 1012 | 26519 | | kin-eng | flores200-devtest | 0.41964 | 18.1 | 1012 | 24721 | | nso-eng | flores200-devtest | 0.45662 | 22.3 | 1012 | 24721 | | sna-eng | flores200-devtest | 0.41974 | 17.2 | 1012 | 24721 | | sot-eng | flores200-devtest | 0.45415 | 20.7 | 1012 | 24721 | | swh-eng | flores200-devtest | 0.54048 | 29.1 | 1012 | 24721 | | swh-fra | flores200-devtest | 0.44837 | 18.2 | 1012 | 28343 | | swh-por | flores200-devtest | 0.44062 | 17.6 | 1012 | 26519 | | tsn-eng | flores200-devtest | 0.40410 | 15.3 | 1012 | 24721 | | tso-eng | flores200-devtest | 0.41504 | 17.6 | 1012 | 24721 | | xho-eng | flores200-devtest | 0.47667 | 23.7 | 1012 | 24721 | | zul-eng | flores200-devtest | 0.47798 | 23.4 | 1012 | 24721 | | ibo-eng | ntrex128 | 0.42002 | 17.4 | 1997 | 47673 | | kin-eng | ntrex128 | 0.42892 | 16.9 | 1997 | 47673 | | nso-eng | ntrex128 | 0.42278 | 17.0 | 1997 | 47673 | | nya-eng | ntrex128 | 0.42702 | 19.2 | 1997 | 47673 | | ssw-eng | ntrex128 | 0.43041 | 18.0 | 1997 | 47673 | | swa-eng | ntrex128 | 0.54492 | 30.4 | 1997 | 47673 | | swa-fra | ntrex128 | 0.43008 | 15.6 | 1997 | 53481 | | swa-por | ntrex128 | 0.42343 | 15.4 | 1997 | 
51631 | | swa-spa | ntrex128 | 0.44892 | 18.9 | 1997 | 54107 | | tsn-eng | ntrex128 | 0.44944 | 20.1 | 1997 | 47673 | | xho-eng | ntrex128 | 0.46636 | 21.8 | 1997 | 47673 | | zul-eng | ntrex128 | 0.45848 | 21.9 | 1997 | 47673 | | zul-eng | tico19-test | 0.48762 | 25.2 | 2100 | 56804 | | zul-spa | tico19-test | 0.40041 | 15.9 | 2100 | 66563 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 17:13:22 EEST 2024 * port machine: LM0-400-22516.local
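As a complement to the usage examples above, the sketch below is an addition (not part of the original card): it reuses the same MarianMT API to translate one source sentence into each of the five target languages by swapping the sentence-initial `>>id<<` token, and it assumes the published checkpoint name shown in the pipeline example.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-alv-deu_eng_fra_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

source = "Replace this with text in an accepted source language."

# One generation per valid target-language label listed in the model description.
for lang in ("deu", "eng", "fra", "por", "spa"):
    batch = tokenizer([f">>{lang}<< {source}"], return_tensors="pt", padding=True)
    output = model.generate(**batch)
    print(lang, tokenizer.decode(output[0], skip_special_tokens=True))
```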
Helsinki-NLP/opus-mt-tc-bible-big-afa-fra_ita_por_spa
Helsinki-NLP
2024-10-07T14:12:30Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "am", "ar", "bcw", "cop", "daa", "dsh", "es", "fr", "gde", "gnd", "ha", "hbo", "he", "hig", "irk", "it", "jpa", "kab", "ker", "kqp", "ktb", "kxc", "lln", "lme", "meq", "mfh", "mfi", "mfk", "mif", "mpg", "mqb", "mt", "muy", "oar", "pbi", "phn", "pt", "rif", "sgw", "shi", "shy", "so", "sur", "syc", "thv", "ti", "tmc", "tmh", "tmr", "ttr", "wal", "xed", "zgh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T13:16:02Z
--- library_name: transformers language: - am - ar - bcw - cop - daa - dsh - es - fr - gde - gnd - ha - hbo - he - hig - irk - it - jpa - kab - ker - kqp - ktb - kxc - lln - lme - meq - mfh - mfi - mfk - mif - mpg - mqb - mt - muy - oar - pbi - phn - pt - rif - sgw - shi - shy - so - sur - syc - thv - ti - tmc - tmh - tmr - ttr - wal - xed - zgh tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-afa-fra_ita_por_spa results: - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 25.6 - name: chr-F type: chrf value: 0.44153 --- # opus-mt-tc-bible-big-afa-fra_ita_por_spa ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Afro-Asiatic languages (afa) to unknown (fra+ita+por+spa). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-08-17 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): amh apc ara arq arz bcw cop daa dsh gde gnd hau hbo heb hig irk jpa kab ker kqp ktb kxc lln lme meq mfh mfi mfk mif mlt mpg mqb muy oar pbi phn rif sgw shi shy som sur syc thv tir tmc tmh tmr ttr wal xed zgh - Target Language(s): fra ita por spa - Valid Target Language Labels: >>fra<< >>ita<< >>por<< >>spa<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. 
A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>fra<< Aqcic amenzu yessaweḍ ad yesleg tukkest-is.", ">>fra<< Aɣet ihi adlis." ] model_name = "pytorch-models/opus-mt-tc-bible-big-afa-fra_ita_por_spa" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # L'enfant a réussi à l'enlever. # Alors, rédigez un livre. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-afa-fra_ita_por_spa") print(pipe(">>fra<< Aqcic amenzu yessaweḍ ad yesleg tukkest-is.")) # expected output: L'enfant a réussi à l'enlever. ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-fra%2Bita%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-fra+ita+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.44153 | 25.6 | 10000 | 78439 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and 
Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 17:12:20 EEST 2024 * port machine: LM0-400-22516.local
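The valid target-language labels quoted earlier in this card come from the card text itself; if you want to check which labels a downloaded checkpoint actually accepts, the sketch below is an addition (not part of the original card) that inspects the tokenizer, assuming the `>>xxx<<` tokens are stored in its vocabulary as they are for other multilingual Marian models.

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-tc-bible-big-afa-fra_ita_por_spa"
)

# MarianTokenizer collects the ">>xxx<<" entries it finds in the vocabulary;
# for this card they are expected to include >>fra<<, >>ita<<, >>por<< and >>spa<<.
print(tokenizer.supported_language_codes)
```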
Helsinki-NLP/opus-mt-tc-bible-big-afa-deu_eng_nld
Helsinki-NLP
2024-10-07T14:09:57Z
126
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "aa", "am", "ar", "arc", "bcw", "byn", "cop", "daa", "de", "dsh", "en", "gde", "gnd", "ha", "hbo", "he", "hig", "irk", "jpa", "kab", "ker", "kqp", "ktb", "kxc", "lln", "lme", "meq", "mfh", "mfi", "mfk", "mif", "mpg", "mqb", "mt", "muy", "nl", "oar", "om", "pbi", "phn", "rif", "sgw", "shi", "shy", "so", "sur", "syc", "syr", "taq", "ti", "tig", "tmc", "tmh", "tmr", "ttr", "tzm", "wal", "xed", "zgh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T13:08:25Z
--- library_name: transformers language: - aa - am - ar - arc - bcw - byn - cop - daa - de - dsh - en - gde - gnd - ha - hbo - he - hig - irk - jpa - kab - ker - kqp - ktb - kxc - lln - lme - meq - mfh - mfi - mfk - mif - mpg - mqb - mt - muy - nl - oar - om - pbi - phn - rif - sgw - shi - shy - so - sur - syc - syr - taq - ti - tig - tmc - tmh - tmr - ttr - tzm - wal - xed - zgh tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-afa-deu_eng_nld results: - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 39.9 - name: chr-F type: chrf value: 0.57350 --- # opus-mt-tc-bible-big-afa-deu_eng_nld ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Afro-Asiatic languages (afa) to unknown (deu+eng+nld). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-08-17 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): aar acm afb amh apc ara arc arq arz bcw byn cop daa dsh gde gnd hau hbo heb hig irk jpa kab ker kqp ktb kxc lln lme meq mfh mfi mfk mif mlt mpg mqb muy oar orm pbi phn rif sgw shi shy som sur syc syr taq tig tir tmc tmh tmr ttr tzm wal xed zgh - Target Language(s): deu eng nld - Valid Target Language Labels: >>deu<< >>eng<< >>nld<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. 
A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>eng<< هذا هو المكان الذي تعيش فيه.", ">>eng<< Amdan yesnulfa-d Ṛebbi akken kan wa ur ineqq wa." ] model_name = "pytorch-models/opus-mt-tc-bible-big-afa-deu_eng_nld" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # This is where you live. # The man who had been killed by God didn't kill him. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-afa-deu_eng_nld") print(pipe(">>eng<< هذا هو المكان الذي تعيش فيه.")) # expected output: This is where you live. ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.57350 | 39.9 | 10000 | 73314 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual 
MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 17:09:45 EEST 2024 * port machine: LM0-400-22516.local
irlab-udc/Llama-3.1-8B-Instruct-Galician-GPTQ-Int4
irlab-udc
2024-10-07T14:09:21Z
81
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "gl", "base_model:irlab-udc/Llama-3.1-8B-Instruct-Galician", "base_model:quantized:irlab-udc/Llama-3.1-8B-Instruct-Galician", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-07T13:14:44Z
---
base_model:
- irlab-udc/Llama-3.1-8B-Instruct-Galician
license: llama3.1
language:
- gl
pipeline_tag: text-generation
library_name: transformers
---

4-bit quantized version of [irlab-udc/Llama-3.1-8B-Instruct-Galician](https://huggingface.co/irlab-udc/Llama-3.1-8B-Instruct-Galician).

## How to Use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "irlab-udc/Llama-3.1-8B-Instruct-Galician-GPTQ-Int4"

# Load the tokenizer and the GPTQ Int4 checkpoint in half precision across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto"
)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a conversational AI that responds in Galician."},
    {"role": "user", "content": "Cal é a principal vantaxe de Scrum?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Sample up to 512 new tokens and decode the full sequence.
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
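The example above decodes the whole generated sequence, so the printed output also repeats the chat prompt. A minimal follow-up sketch (not from the original card) keeps only the newly generated tokens by slicing at the prompt length; it reuses the `tokenizer`, `model`, and `inputs` objects defined above.

```python
# Continues from the example above (tokenizer, model, inputs already defined).
prompt_len = inputs["input_ids"].shape[1]

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=512)

# Drop the prompt tokens and decode only the model's reply.
reply = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(reply)
```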
Helsinki-NLP/opus-mt-tc-bible-big-afa-deu_eng_fra_por_spa
Helsinki-NLP
2024-10-07T14:08:42Z
217
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "aa", "am", "ar", "arc", "bcw", "byn", "cop", "daa", "de", "dsh", "en", "es", "fr", "gde", "gnd", "ha", "hbo", "he", "hig", "irk", "jpa", "kab", "ker", "kqp", "ktb", "kxc", "lln", "lme", "meq", "mfh", "mfi", "mfk", "mif", "mpg", "mqb", "mt", "muy", "oar", "om", "pbi", "phn", "pt", "rif", "sgw", "shi", "shy", "so", "sur", "syc", "syr", "taq", "thv", "ti", "tig", "tmc", "tmh", "tmr", "ttr", "tzm", "wal", "xed", "zgh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T13:02:02Z
--- library_name: transformers language: - aa - am - ar - arc - bcw - byn - cop - daa - de - dsh - en - es - fr - gde - gnd - ha - hbo - he - hig - irk - jpa - kab - ker - kqp - ktb - kxc - lln - lme - meq - mfh - mfi - mfk - mif - mpg - mqb - mt - muy - oar - om - pbi - phn - pt - rif - sgw - shi - shy - so - sur - syc - syr - taq - thv - ti - tig - tmc - tmh - tmr - ttr - tzm - wal - xed - zgh tags: - translation - opus-mt-tc-bible license: apache-2.0 model-index: - name: opus-mt-tc-bible-big-afa-deu_eng_fra_por_spa results: - task: name: Translation acm-deu type: translation args: acm-deu dataset: name: flores200-devtest type: flores200-devtest args: acm-deu metrics: - name: BLEU type: bleu value: 17.6 - name: chr-F type: chrf value: 0.48947 - task: name: Translation acm-eng type: translation args: acm-eng dataset: name: flores200-devtest type: flores200-devtest args: acm-eng metrics: - name: BLEU type: bleu value: 28.5 - name: chr-F type: chrf value: 0.56799 - task: name: Translation acm-fra type: translation args: acm-fra dataset: name: flores200-devtest type: flores200-devtest args: acm-fra metrics: - name: BLEU type: bleu value: 26.1 - name: chr-F type: chrf value: 0.53577 - task: name: Translation acm-por type: translation args: acm-por dataset: name: flores200-devtest type: flores200-devtest args: acm-por metrics: - name: BLEU type: bleu value: 23.9 - name: chr-F type: chrf value: 0.52441 - task: name: Translation acm-spa type: translation args: acm-spa dataset: name: flores200-devtest type: flores200-devtest args: acm-spa metrics: - name: BLEU type: bleu value: 18.2 - name: chr-F type: chrf value: 0.46985 - task: name: Translation amh-deu type: translation args: amh-deu dataset: name: flores200-devtest type: flores200-devtest args: amh-deu metrics: - name: BLEU type: bleu value: 12.6 - name: chr-F type: chrf value: 0.41553 - task: name: Translation amh-eng type: translation args: amh-eng dataset: name: flores200-devtest type: flores200-devtest args: amh-eng metrics: - name: BLEU type: bleu value: 22.5 - name: chr-F type: chrf value: 0.49333 - task: name: Translation amh-fra type: translation args: amh-fra dataset: name: flores200-devtest type: flores200-devtest args: amh-fra metrics: - name: BLEU type: bleu value: 17.8 - name: chr-F type: chrf value: 0.44890 - task: name: Translation amh-por type: translation args: amh-por dataset: name: flores200-devtest type: flores200-devtest args: amh-por metrics: - name: BLEU type: bleu value: 16.5 - name: chr-F type: chrf value: 0.43771 - task: name: Translation amh-spa type: translation args: amh-spa dataset: name: flores200-devtest type: flores200-devtest args: amh-spa metrics: - name: BLEU type: bleu value: 12.8 - name: chr-F type: chrf value: 0.39526 - task: name: Translation apc-deu type: translation args: apc-deu dataset: name: flores200-devtest type: flores200-devtest args: apc-deu metrics: - name: BLEU type: bleu value: 16.0 - name: chr-F type: chrf value: 0.47480 - task: name: Translation apc-eng type: translation args: apc-eng dataset: name: flores200-devtest type: flores200-devtest args: apc-eng metrics: - name: BLEU type: bleu value: 28.1 - name: chr-F type: chrf value: 0.56075 - task: name: Translation apc-fra type: translation args: apc-fra dataset: name: flores200-devtest type: flores200-devtest args: apc-fra metrics: - name: BLEU type: bleu value: 24.6 - name: chr-F type: chrf value: 0.52325 - task: name: Translation apc-por type: translation args: apc-por dataset: name: flores200-devtest type: flores200-devtest args: 
apc-por metrics: - name: BLEU type: bleu value: 22.9 - name: chr-F type: chrf value: 0.51055 - task: name: Translation apc-spa type: translation args: apc-spa dataset: name: flores200-devtest type: flores200-devtest args: apc-spa metrics: - name: BLEU type: bleu value: 17.2 - name: chr-F type: chrf value: 0.45634 - task: name: Translation arz-deu type: translation args: arz-deu dataset: name: flores200-devtest type: flores200-devtest args: arz-deu metrics: - name: BLEU type: bleu value: 14.1 - name: chr-F type: chrf value: 0.45844 - task: name: Translation arz-eng type: translation args: arz-eng dataset: name: flores200-devtest type: flores200-devtest args: arz-eng metrics: - name: BLEU type: bleu value: 22.7 - name: chr-F type: chrf value: 0.52534 - task: name: Translation arz-fra type: translation args: arz-fra dataset: name: flores200-devtest type: flores200-devtest args: arz-fra metrics: - name: BLEU type: bleu value: 21.8 - name: chr-F type: chrf value: 0.50336 - task: name: Translation arz-por type: translation args: arz-por dataset: name: flores200-devtest type: flores200-devtest args: arz-por metrics: - name: BLEU type: bleu value: 20.0 - name: chr-F type: chrf value: 0.48741 - task: name: Translation arz-spa type: translation args: arz-spa dataset: name: flores200-devtest type: flores200-devtest args: arz-spa metrics: - name: BLEU type: bleu value: 15.8 - name: chr-F type: chrf value: 0.44516 - task: name: Translation hau-eng type: translation args: hau-eng dataset: name: flores200-devtest type: flores200-devtest args: hau-eng metrics: - name: BLEU type: bleu value: 23.4 - name: chr-F type: chrf value: 0.48137 - task: name: Translation hau-fra type: translation args: hau-fra dataset: name: flores200-devtest type: flores200-devtest args: hau-fra metrics: - name: BLEU type: bleu value: 17.2 - name: chr-F type: chrf value: 0.42981 - task: name: Translation hau-por type: translation args: hau-por dataset: name: flores200-devtest type: flores200-devtest args: hau-por metrics: - name: BLEU type: bleu value: 15.7 - name: chr-F type: chrf value: 0.41385 - task: name: Translation hau-spa type: translation args: hau-spa dataset: name: flores200-devtest type: flores200-devtest args: hau-spa metrics: - name: BLEU type: bleu value: 11.7 - name: chr-F type: chrf value: 0.37800 - task: name: Translation heb-deu type: translation args: heb-deu dataset: name: flores200-devtest type: flores200-devtest args: heb-deu metrics: - name: BLEU type: bleu value: 22.8 - name: chr-F type: chrf value: 0.53482 - task: name: Translation heb-eng type: translation args: heb-eng dataset: name: flores200-devtest type: flores200-devtest args: heb-eng metrics: - name: BLEU type: bleu value: 38.0 - name: chr-F type: chrf value: 0.63368 - task: name: Translation heb-fra type: translation args: heb-fra dataset: name: flores200-devtest type: flores200-devtest args: heb-fra metrics: - name: BLEU type: bleu value: 32.6 - name: chr-F type: chrf value: 0.58417 - task: name: Translation heb-por type: translation args: heb-por dataset: name: flores200-devtest type: flores200-devtest args: heb-por metrics: - name: BLEU type: bleu value: 30.7 - name: chr-F type: chrf value: 0.57140 - task: name: Translation kab-eng type: translation args: kab-eng dataset: name: flores200-devtest type: flores200-devtest args: kab-eng metrics: - name: BLEU type: bleu value: 10.0 - name: chr-F type: chrf value: 0.32121 - task: name: Translation mlt-eng type: translation args: mlt-eng dataset: name: flores200-devtest type: flores200-devtest args: 
mlt-eng metrics: - name: BLEU type: bleu value: 51.1 - name: chr-F type: chrf value: 0.73415 - task: name: Translation mlt-fra type: translation args: mlt-fra dataset: name: flores200-devtest type: flores200-devtest args: mlt-fra metrics: - name: BLEU type: bleu value: 35.8 - name: chr-F type: chrf value: 0.61626 - task: name: Translation mlt-spa type: translation args: mlt-spa dataset: name: flores200-devtest type: flores200-devtest args: mlt-spa metrics: - name: BLEU type: bleu value: 21.8 - name: chr-F type: chrf value: 0.50534 - task: name: Translation som-eng type: translation args: som-eng dataset: name: flores200-devtest type: flores200-devtest args: som-eng metrics: - name: BLEU type: bleu value: 17.7 - name: chr-F type: chrf value: 0.42764 - task: name: Translation som-fra type: translation args: som-fra dataset: name: flores200-devtest type: flores200-devtest args: som-fra metrics: - name: BLEU type: bleu value: 14.4 - name: chr-F type: chrf value: 0.39536 - task: name: Translation som-por type: translation args: som-por dataset: name: flores200-devtest type: flores200-devtest args: som-por metrics: - name: BLEU type: bleu value: 14.1 - name: chr-F type: chrf value: 0.39440 - task: name: Translation som-spa type: translation args: som-spa dataset: name: flores200-devtest type: flores200-devtest args: som-spa metrics: - name: BLEU type: bleu value: 10.0 - name: chr-F type: chrf value: 0.35321 - task: name: Translation tir-eng type: translation args: tir-eng dataset: name: flores200-devtest type: flores200-devtest args: tir-eng metrics: - name: BLEU type: bleu value: 11.6 - name: chr-F type: chrf value: 0.37119 - task: name: Translation amh-deu type: translation args: amh-deu dataset: name: flores101-devtest type: flores_101 args: amh deu devtest metrics: - name: BLEU type: bleu value: 11.5 - name: chr-F type: chrf value: 0.39890 - task: name: Translation amh-eng type: translation args: amh-eng dataset: name: flores101-devtest type: flores_101 args: amh eng devtest metrics: - name: BLEU type: bleu value: 21.0 - name: chr-F type: chrf value: 0.47357 - task: name: Translation amh-fra type: translation args: amh-fra dataset: name: flores101-devtest type: flores_101 args: amh fra devtest metrics: - name: BLEU type: bleu value: 16.2 - name: chr-F type: chrf value: 0.43155 - task: name: Translation amh-por type: translation args: amh-por dataset: name: flores101-devtest type: flores_101 args: amh por devtest metrics: - name: BLEU type: bleu value: 15.1 - name: chr-F type: chrf value: 0.42109 - task: name: Translation amh-spa type: translation args: amh-spa dataset: name: flores101-devtest type: flores_101 args: amh spa devtest metrics: - name: BLEU type: bleu value: 11.5 - name: chr-F type: chrf value: 0.38003 - task: name: Translation ara-deu type: translation args: ara-deu dataset: name: flores101-devtest type: flores_101 args: ara deu devtest metrics: - name: BLEU type: bleu value: 20.4 - name: chr-F type: chrf value: 0.51110 - task: name: Translation ara-fra type: translation args: ara-fra dataset: name: flores101-devtest type: flores_101 args: ara fra devtest metrics: - name: BLEU type: bleu value: 29.7 - name: chr-F type: chrf value: 0.56934 - task: name: Translation ara-por type: translation args: ara-por dataset: name: flores101-devtest type: flores_101 args: ara por devtest metrics: - name: BLEU type: bleu value: 28.2 - name: chr-F type: chrf value: 0.55727 - task: name: Translation ara-spa type: translation args: ara-spa dataset: name: flores101-devtest type: flores_101 args: 
ara spa devtest metrics: - name: BLEU type: bleu value: 19.5 - name: chr-F type: chrf value: 0.48350 - task: name: Translation hau-eng type: translation args: hau-eng dataset: name: flores101-devtest type: flores_101 args: hau eng devtest metrics: - name: BLEU type: bleu value: 21.6 - name: chr-F type: chrf value: 0.46804 - task: name: Translation hau-fra type: translation args: hau-fra dataset: name: flores101-devtest type: flores_101 args: hau fra devtest metrics: - name: BLEU type: bleu value: 15.9 - name: chr-F type: chrf value: 0.41827 - task: name: Translation hau-spa type: translation args: hau-spa dataset: name: flores101-devtest type: flores_101 args: hau spa devtest metrics: - name: BLEU type: bleu value: 11.5 - name: chr-F type: chrf value: 0.37201 - task: name: Translation heb-eng type: translation args: heb-eng dataset: name: flores101-devtest type: flores_101 args: heb eng devtest metrics: - name: BLEU type: bleu value: 36.6 - name: chr-F type: chrf value: 0.62422 - task: name: Translation mlt-eng type: translation args: mlt-eng dataset: name: flores101-devtest type: flores_101 args: mlt eng devtest metrics: - name: BLEU type: bleu value: 49.1 - name: chr-F type: chrf value: 0.72390 - task: name: Translation mlt-fra type: translation args: mlt-fra dataset: name: flores101-devtest type: flores_101 args: mlt fra devtest metrics: - name: BLEU type: bleu value: 34.7 - name: chr-F type: chrf value: 0.60840 - task: name: Translation mlt-por type: translation args: mlt-por dataset: name: flores101-devtest type: flores_101 args: mlt por devtest metrics: - name: BLEU type: bleu value: 31.8 - name: chr-F type: chrf value: 0.59863 - task: name: Translation amh-eng type: translation args: amh-eng dataset: name: ntrex128 type: ntrex128 args: amh-eng metrics: - name: BLEU type: bleu value: 15.0 - name: chr-F type: chrf value: 0.42042 - task: name: Translation amh-fra type: translation args: amh-fra dataset: name: ntrex128 type: ntrex128 args: amh-fra metrics: - name: BLEU type: bleu value: 10.8 - name: chr-F type: chrf value: 0.37274 - task: name: Translation amh-spa type: translation args: amh-spa dataset: name: ntrex128 type: ntrex128 args: amh-spa metrics: - name: BLEU type: bleu value: 12.1 - name: chr-F type: chrf value: 0.38306 - task: name: Translation hau-eng type: translation args: hau-eng dataset: name: ntrex128 type: ntrex128 args: hau-eng metrics: - name: BLEU type: bleu value: 26.1 - name: chr-F type: chrf value: 0.50349 - task: name: Translation hau-fra type: translation args: hau-fra dataset: name: ntrex128 type: ntrex128 args: hau-fra metrics: - name: BLEU type: bleu value: 15.8 - name: chr-F type: chrf value: 0.41837 - task: name: Translation hau-por type: translation args: hau-por dataset: name: ntrex128 type: ntrex128 args: hau-por metrics: - name: BLEU type: bleu value: 15.3 - name: chr-F type: chrf value: 0.40851 - task: name: Translation hau-spa type: translation args: hau-spa dataset: name: ntrex128 type: ntrex128 args: hau-spa metrics: - name: BLEU type: bleu value: 18.5 - name: chr-F type: chrf value: 0.43376 - task: name: Translation heb-deu type: translation args: heb-deu dataset: name: ntrex128 type: ntrex128 args: heb-deu metrics: - name: BLEU type: bleu value: 17.7 - name: chr-F type: chrf value: 0.49482 - task: name: Translation heb-eng type: translation args: heb-eng dataset: name: ntrex128 type: ntrex128 args: heb-eng metrics: - name: BLEU type: bleu value: 31.3 - name: chr-F type: chrf value: 0.59241 - task: name: Translation heb-fra type: translation 
args: heb-fra dataset: name: ntrex128 type: ntrex128 args: heb-fra metrics: - name: BLEU type: bleu value: 24.0 - name: chr-F type: chrf value: 0.52180 - task: name: Translation heb-por type: translation args: heb-por dataset: name: ntrex128 type: ntrex128 args: heb-por metrics: - name: BLEU type: bleu value: 23.2 - name: chr-F type: chrf value: 0.51248 - task: name: Translation mlt-spa type: translation args: mlt-spa dataset: name: ntrex128 type: ntrex128 args: mlt-spa metrics: - name: BLEU type: bleu value: 30.9 - name: chr-F type: chrf value: 0.57078 - task: name: Translation som-deu type: translation args: som-deu dataset: name: ntrex128 type: ntrex128 args: som-deu metrics: - name: BLEU type: bleu value: 11.0 - name: chr-F type: chrf value: 0.39394 - task: name: Translation som-eng type: translation args: som-eng dataset: name: ntrex128 type: ntrex128 args: som-eng metrics: - name: BLEU type: bleu value: 24.3 - name: chr-F type: chrf value: 0.49187 - task: name: Translation som-fra type: translation args: som-fra dataset: name: ntrex128 type: ntrex128 args: som-fra metrics: - name: BLEU type: bleu value: 15.1 - name: chr-F type: chrf value: 0.41236 - task: name: Translation som-por type: translation args: som-por dataset: name: ntrex128 type: ntrex128 args: som-por metrics: - name: BLEU type: bleu value: 15.2 - name: chr-F type: chrf value: 0.41550 - task: name: Translation som-spa type: translation args: som-spa dataset: name: ntrex128 type: ntrex128 args: som-spa metrics: - name: BLEU type: bleu value: 17.6 - name: chr-F type: chrf value: 0.43278 - task: name: Translation tir-eng type: translation args: tir-eng dataset: name: ntrex128 type: ntrex128 args: tir-eng metrics: - name: BLEU type: bleu value: 12.5 - name: chr-F type: chrf value: 0.37185 - task: name: Translation tir-spa type: translation args: tir-spa dataset: name: ntrex128 type: ntrex128 args: tir-spa metrics: - name: BLEU type: bleu value: 10.2 - name: chr-F type: chrf value: 0.34985 - task: name: Translation amh-eng type: translation args: amh-eng dataset: name: tatoeba-test-v2021-03-30 type: tatoeba_mt args: amh-eng metrics: - name: BLEU type: bleu value: 59.5 - name: chr-F type: chrf value: 0.70202 - task: name: Translation ara-deu type: translation args: ara-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ara-deu metrics: - name: BLEU type: bleu value: 41.7 - name: chr-F type: chrf value: 0.61039 - task: name: Translation ara-eng type: translation args: ara-eng dataset: name: tatoeba-test-v2020-07-28 type: tatoeba_mt args: ara-eng metrics: - name: BLEU type: bleu value: 42.8 - name: chr-F type: chrf value: 0.60134 - task: name: Translation ara-fra type: translation args: ara-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ara-fra metrics: - name: BLEU type: bleu value: 38.8 - name: chr-F type: chrf value: 0.56120 - task: name: Translation ara-spa type: translation args: ara-spa dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: ara-spa metrics: - name: BLEU type: bleu value: 43.7 - name: chr-F type: chrf value: 0.62567 - task: name: Translation heb-deu type: translation args: heb-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: heb-deu metrics: - name: BLEU type: bleu value: 42.4 - name: chr-F type: chrf value: 0.63131 - task: name: Translation heb-eng type: translation args: heb-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: heb-eng metrics: - name: BLEU type: bleu value: 49.2 - name: chr-F type: chrf value: 0.64960 - task: 
name: Translation heb-fra type: translation args: heb-fra dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: heb-fra metrics: - name: BLEU type: bleu value: 46.3 - name: chr-F type: chrf value: 0.64348 - task: name: Translation heb-por type: translation args: heb-por dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: heb-por metrics: - name: BLEU type: bleu value: 43.2 - name: chr-F type: chrf value: 0.63350 - task: name: Translation kab-eng type: translation args: kab-eng dataset: name: tatoeba-test-v2021-03-30 type: tatoeba_mt args: kab-eng metrics: - name: BLEU type: bleu value: 27.8 - name: chr-F type: chrf value: 0.44646 - task: name: Translation kab-spa type: translation args: kab-spa dataset: name: tatoeba-test-v2020-07-28 type: tatoeba_mt args: kab-spa metrics: - name: BLEU type: bleu value: 23.4 - name: chr-F type: chrf value: 0.41526 - task: name: Translation mlt-eng type: translation args: mlt-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: mlt-eng metrics: - name: BLEU type: bleu value: 51.0 - name: chr-F type: chrf value: 0.66653 - task: name: Translation multi-multi type: translation args: multi-multi dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-multi metrics: - name: BLEU type: bleu value: 34.0 - name: chr-F type: chrf value: 0.51369 - task: name: Translation hau-eng type: translation args: hau-eng dataset: name: newstest2021 type: wmt-2021-news args: hau-eng metrics: - name: BLEU type: bleu value: 15.5 - name: chr-F type: chrf value: 0.43744 --- # opus-mt-tc-bible-big-afa-deu_eng_fra_por_spa ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Afro-Asiatic languages (afa) to unknown (deu+eng+fra+por+spa). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). 
**Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-05-29 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): aar acm afb amh apc ara arc arq arz bcw byn cop daa dsh gde gnd hau hbo heb hig irk jpa kab ker kqp ktb kxc lln lme meq mfh mfi mfk mif mlt mpg mqb muy oar orm pbi phn rif sgw shi shy som sur syc syr taq thv tig tir tmc tmh tmr ttr tzm wal xed zgh - Target Language(s): deu eng fra por spa - Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>eng<< Anta i ak-d-yennan ur yerbiḥ ara Tom?", ">>fra<< Iselman d aɣbalu axatar i wučči n yemdanen." ] model_name = "pytorch-models/opus-mt-tc-bible-big-afa-deu_eng_fra_por_spa" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Who told you that he didn't? # L'eau est une source importante de nourriture pour les gens. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-afa-deu_eng_fra_por_spa") print(pipe(">>eng<< Anta i ak-d-yennan ur yerbiḥ ara Tom?")) # expected output: Who told you that he didn't? 
``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | ara-deu | tatoeba-test-v2021-08-07 | 0.61039 | 41.7 | 1209 | 8371 | | ara-eng | tatoeba-test-v2021-08-07 | 5.430 | 0.0 | 10305 | 76975 | | ara-fra | tatoeba-test-v2021-08-07 | 0.56120 | 38.8 | 1569 | 11066 | | ara-spa | tatoeba-test-v2021-08-07 | 0.62567 | 43.7 | 1511 | 9708 | | heb-deu | tatoeba-test-v2021-08-07 | 0.63131 | 42.4 | 3090 | 25101 | | heb-eng | tatoeba-test-v2021-08-07 | 0.64960 | 49.2 | 10519 | 77427 | | heb-fra | tatoeba-test-v2021-08-07 | 0.64348 | 46.3 | 3281 | 26123 | | heb-por | tatoeba-test-v2021-08-07 | 0.63350 | 43.2 | 719 | 5335 | | mlt-eng | tatoeba-test-v2021-08-07 | 0.66653 | 51.0 | 203 | 1165 | | amh-eng | flores101-devtest | 0.47357 | 21.0 | 1012 | 24721 | | amh-fra | flores101-devtest | 0.43155 | 16.2 | 1012 | 28343 | | amh-por | flores101-devtest | 0.42109 | 15.1 | 1012 | 26519 | | ara-deu | flores101-devtest | 0.51110 | 20.4 | 1012 | 25094 | | ara-fra | flores101-devtest | 0.56934 | 29.7 | 1012 | 28343 | | ara-por | flores101-devtest | 0.55727 | 28.2 | 1012 | 26519 | | ara-spa | flores101-devtest | 0.48350 | 19.5 | 1012 | 29199 | | hau-eng | flores101-devtest | 0.46804 | 21.6 | 1012 | 24721 | | hau-fra | flores101-devtest | 0.41827 | 15.9 | 1012 | 28343 | | heb-eng | flores101-devtest | 0.62422 | 36.6 | 1012 | 24721 | | mlt-eng | flores101-devtest | 0.72390 | 49.1 | 1012 | 24721 | | mlt-fra | flores101-devtest | 0.60840 | 34.7 | 1012 | 28343 | | mlt-por | flores101-devtest | 0.59863 | 31.8 | 1012 | 26519 | | acm-deu | flores200-devtest | 0.48947 | 17.6 | 1012 | 25094 | | acm-eng | flores200-devtest | 0.56799 | 28.5 | 1012 | 24721 | | acm-fra | flores200-devtest | 0.53577 | 26.1 | 1012 | 28343 | | acm-por | flores200-devtest | 0.52441 | 23.9 | 1012 | 26519 | | acm-spa | flores200-devtest | 0.46985 | 18.2 | 1012 | 29199 | | amh-deu | flores200-devtest | 0.41553 | 12.6 | 1012 | 25094 | | amh-eng | flores200-devtest | 0.49333 | 22.5 | 1012 | 24721 | | amh-fra | flores200-devtest | 0.44890 | 17.8 | 1012 | 28343 | | amh-por | flores200-devtest | 0.43771 | 16.5 | 1012 | 26519 | | apc-deu | 
flores200-devtest | 0.47480 | 16.0 | 1012 | 25094 | | apc-eng | flores200-devtest | 0.56075 | 28.1 | 1012 | 24721 | | apc-fra | flores200-devtest | 0.52325 | 24.6 | 1012 | 28343 | | apc-por | flores200-devtest | 0.51055 | 22.9 | 1012 | 26519 | | apc-spa | flores200-devtest | 0.45634 | 17.2 | 1012 | 29199 | | arz-deu | flores200-devtest | 0.45844 | 14.1 | 1012 | 25094 | | arz-eng | flores200-devtest | 0.52534 | 22.7 | 1012 | 24721 | | arz-fra | flores200-devtest | 0.50336 | 21.8 | 1012 | 28343 | | arz-por | flores200-devtest | 0.48741 | 20.0 | 1012 | 26519 | | arz-spa | flores200-devtest | 0.44516 | 15.8 | 1012 | 29199 | | hau-eng | flores200-devtest | 0.48137 | 23.4 | 1012 | 24721 | | hau-fra | flores200-devtest | 0.42981 | 17.2 | 1012 | 28343 | | hau-por | flores200-devtest | 0.41385 | 15.7 | 1012 | 26519 | | heb-deu | flores200-devtest | 0.53482 | 22.8 | 1012 | 25094 | | heb-eng | flores200-devtest | 0.63368 | 38.0 | 1012 | 24721 | | heb-fra | flores200-devtest | 0.58417 | 32.6 | 1012 | 28343 | | heb-por | flores200-devtest | 0.57140 | 30.7 | 1012 | 26519 | | mlt-eng | flores200-devtest | 0.73415 | 51.1 | 1012 | 24721 | | mlt-fra | flores200-devtest | 0.61626 | 35.8 | 1012 | 28343 | | mlt-spa | flores200-devtest | 0.50534 | 21.8 | 1012 | 29199 | | som-eng | flores200-devtest | 0.42764 | 17.7 | 1012 | 24721 | | tir-por | flores200-devtest | 2.931 | 0.0 | 1012 | 26519 | | hau-eng | newstest2021 | 0.43744 | 15.5 | 997 | 27372 | | amh-eng | ntrex128 | 0.42042 | 15.0 | 1997 | 47673 | | hau-eng | ntrex128 | 0.50349 | 26.1 | 1997 | 47673 | | hau-fra | ntrex128 | 0.41837 | 15.8 | 1997 | 53481 | | hau-por | ntrex128 | 0.40851 | 15.3 | 1997 | 51631 | | hau-spa | ntrex128 | 0.43376 | 18.5 | 1997 | 54107 | | heb-deu | ntrex128 | 0.49482 | 17.7 | 1997 | 48761 | | heb-eng | ntrex128 | 0.59241 | 31.3 | 1997 | 47673 | | heb-fra | ntrex128 | 0.52180 | 24.0 | 1997 | 53481 | | heb-por | ntrex128 | 0.51248 | 23.2 | 1997 | 51631 | | mlt-spa | ntrex128 | 0.57078 | 30.9 | 1997 | 54107 | | som-eng | ntrex128 | 0.49187 | 24.3 | 1997 | 47673 | | som-fra | ntrex128 | 0.41236 | 15.1 | 1997 | 53481 | | som-por | ntrex128 | 0.41550 | 15.2 | 1997 | 51631 | | som-spa | ntrex128 | 0.43278 | 17.6 | 1997 | 54107 | | tir-eng | tico19-test | 2.655 | 0.0 | 2100 | 56824 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 17:08:30 EEST 2024 * port machine: LM0-400-22516.local
speakleash/Bielik-11B-v2.0-Instruct-GPTQ
speakleash
2024-10-07T14:07:43Z
75
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "gguf", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.0-Instruct", "base_model:quantized:speakleash/Bielik-11B-v2.0-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-09-06T08:26:09Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.0-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.0-Instruct-GPTQ This repo contains GPTQ format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
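For quick reference, a minimal usage sketch (not part of the original card) is shown below. It assumes the checkpoint loads through the standard `transformers` GPTQ integration (the `optimum` and `auto-gptq` packages) on a CUDA GPU with enough memory; the Polish prompt is purely illustrative.

```python
# Hedged sketch: loading a GPTQ-quantized causal LM with transformers.
# Assumes `pip install transformers optimum auto-gptq` and a CUDA device.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "speakleash/Bielik-11B-v2.0-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config stored in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

# Illustrative prompt ("Write a short poem about autumn.")
messages = [{"role": "user", "content": "Napisz krótki wiersz o jesieni."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```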
kip1976/vit-demo
kip1976
2024-10-07T14:07:22Z
163
0
transformers
[ "transformers", "tf", "safetensors", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-07T13:46:59Z
--- base_model: google/vit-base-patch16-224 library_name: transformers license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: vit-demo results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vit-demo This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.44.2 - TensorFlow 2.17.0 - Datasets 3.0.1 - Tokenizers 0.19.1
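Since the card does not yet document usage, here is a hedged inference sketch; it assumes the repository ships TensorFlow weights (as the tags indicate) together with an image processor config and a label mapping, and the image path is a placeholder.

```python
# Hedged sketch: image classification with the TensorFlow ViT checkpoint.
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

repo_id = "kip1976/vit-demo"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TFViTForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred_id, pred_id))
```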
speakleash/Bielik-11B-v2.2-Instruct-AWQ
speakleash
2024-10-07T14:04:28Z
81
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "gguf", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.2-Instruct", "base_model:quantized:speakleash/Bielik-11B-v2.2-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-08-26T09:28:52Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.2-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.2-Instruct-AWQ This repo contains AWQ format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
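A minimal loading sketch (not from the original card), assuming the `transformers` AWQ integration with the `autoawq` package installed and a CUDA GPU:

```python
# Hedged sketch: AWQ checkpoints load through the same AutoModel API once autoawq is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "speakleash/Bielik-11B-v2.2-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```

Generation then works as with the unquantized instruct model, e.g. via `tokenizer.apply_chat_template` followed by `model.generate`.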
speakleash/Bielik-11B-v2.2-Instruct-Quanto-8bit
speakleash
2024-10-07T14:03:17Z
16
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "gguf", "8bit", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.2-Instruct", "base_model:finetune:speakleash/Bielik-11B-v2.2-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2024-08-26T10:41:21Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf - 8bit inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.2-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.2-Instruct-Quanto-8bit This model was converted to Quanto format from [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). **DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!** ## About Quanto Optimum Quanto is a pytorch quantization backend for optimum. Model can be loaded using: ``` from optimum.quanto import QuantizedModelForCausalLM qmodel = QuantizedModelForCausalLM.from_pretrained('speakleash/Bielik-11B-v2.2-Instruct-Quanto-8bit') ``` ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
speakleash/Bielik-11B-v2.2-Instruct-EXL2-4.5bit
speakleash
2024-10-07T14:02:41Z
11
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "gguf", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.2-Instruct", "base_model:finetune:speakleash/Bielik-11B-v2.2-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-08-26T11:40:05Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.2-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.2-Instruct-EXL2-4.5bit This repo contains EXL2 (4.5-bit) format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
speakleash/Bielik-11B-v2.2-Instruct-EXL2-6.5bit
speakleash
2024-10-07T14:02:24Z
6
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "gguf", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.2-Instruct", "base_model:finetune:speakleash/Bielik-11B-v2.2-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-08-26T12:42:15Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.2-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.2-Instruct-EXL2-6.5bit This repo contains EXL2 (6.5-bit) format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
speakleash/Bielik-11B-v2.3-Instruct-4bit-ov
speakleash
2024-10-07T13:59:33Z
111
1
transformers
[ "transformers", "openvino", "mistral", "text-generation", "finetuned", "gguf", "conversational", "pl", "base_model:speakleash/Bielik-11B-v2.3-Instruct", "base_model:finetune:speakleash/Bielik-11B-v2.3-Instruct", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-10-04T08:14:21Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.3-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.3-Instruct-4bit-ov This repo contains OpenVINO 4-bit format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v.2.3-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Model usage with OpenVINO This model can be deployed efficiently using [OpenVINO](https://docs.openvino.ai/2024/index.html). Below you can find two ways of running inference: with the Intel Optimum library, and with the pure OpenVINO library. The simplest LLM inference code uses OpenVINO together with the optimum-intel library. ```python from optimum.intel import OVModelForCausalLM from transformers import AutoTokenizer model_id = "speakleash/Bielik-11B-v2.3-Instruct-4bit-ov" model = OVModelForCausalLM.from_pretrained(model_id, use_cache=False) question = "Dlaczego ryby nie potrafią fruwać?" prompt_text_bielik = f"""<s><|im_start|> system Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|> <|im_start|> user {question}<|im_end|> <|im_start|> assistant """ tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(prompt_text_bielik, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=500) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Run the LLM with only OpenVINO (additionally, the code below uses greedy decoding instead of sampling). ```python import openvino as ov import numpy as np from transformers import AutoTokenizer model_path = "speakleash/Bielik-11B-v2.3-Instruct-4bit-ov/openvino_model.xml" tokenizer = AutoTokenizer.from_pretrained("speakleash/Bielik-11B-v2.3-Instruct-4bit-ov") ov_model = ov.Core().read_model(model_path) compiled_model = ov.compile_model(ov_model, "CPU") infer_request = compiled_model.create_infer_request() question = "Dlaczego ryby nie potrafią fruwać?"
prompt_text_bielik = f"""<s><|im_start|> system Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|> <|im_start|> user {question}<|im_end|> <|im_start|> assistant """ tokens = tokenizer.encode(prompt_text_bielik, return_tensors="np") input_ids = tokens attention_mask = np.ones_like(input_ids) position_ids = np.arange(len(tokens[0])).reshape(1, -1) beam_idx = np.array([0], dtype=np.int32) infer_request.reset_state() prev_output = '' generated_text_ids = np.array([], dtype=np.int32) num_max_token_for_generation = 500 print(f'Pytanie: {question}') print("Odpowiedź:", end=' ', flush=True) for _ in range(num_max_token_for_generation): response = infer_request.infer(inputs={ 'input_ids': input_ids, 'attention_mask': attention_mask, 'position_ids': position_ids, 'beam_idx': beam_idx }) next_token_logits = response['logits'][0, -1, :] sampled_id = np.argmax(next_token_logits) # Greedy decoding generated_text_ids = np.append(generated_text_ids, sampled_id) output_text = tokenizer.decode(generated_text_ids) print(output_text[len(prev_output):], end='', flush=True) prev_output = output_text input_ids = np.array([[sampled_id]], dtype=np.int64) attention_mask = np.array([[1]], dtype=np.int64) position_ids = np.array([[position_ids[0, -1] + 1]], dtype=np.int64) if sampled_id == tokenizer.eos_token_id: print('\n\n*** Zakończono generowanie.') break print(f'\n\n*** Wygenerowano {len(generated_text_ids)} tokenów.') ``` ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.3-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
pui8838/finetuning-emotion-model
pui8838
2024-10-07T13:54:43Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T13:34:10Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuning-emotion-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2138 - Accuracy: 0.9275 - F1: 0.9275 - Precision: 0.9276 - Recall: 0.9275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 250 | 0.3133 | 0.911 | 0.9104 | 0.9104 | 0.911 | | 0.5359 | 2.0 | 500 | 0.2138 | 0.9275 | 0.9275 | 0.9276 | 0.9275 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
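As the intended-use section is still empty, the following hedged sketch shows the standard way such a fine-tuned text-classification checkpoint is queried; the example sentence is illustrative and the label names depend on the (unspecified) fine-tuning dataset.

```python
# Hedged sketch: emotion classification with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="pui8838/finetuning-emotion-model")
print(classifier("I can't stop smiling today!"))
# If no id2label mapping was saved, the output uses generic labels such as LABEL_0.
```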
Web3Liber/vit-base-oxford-iiit-pets
Web3Liber
2024-10-07T13:46:08Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-07T12:59:56Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-oxford-iiit-pets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2046 - Accuracy: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3652 | 1.0 | 370 | 0.3105 | 0.9269 | | 0.2061 | 2.0 | 740 | 0.2322 | 0.9364 | | 0.167 | 3.0 | 1110 | 0.2135 | 0.9337 | | 0.1584 | 4.0 | 1480 | 0.2093 | 0.9337 | | 0.131 | 5.0 | 1850 | 0.2069 | 0.9337 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
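For completeness, a hedged usage sketch (not part of the generated card); the image path is a placeholder, and it is assumed the Oxford-IIIT pet breed labels were saved with the checkpoint.

```python
# Hedged sketch: classify a pet photo with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="Web3Liber/vit-base-oxford-iiit-pets")
print(classifier("path/to/pet_photo.jpg", top_k=3))  # placeholder image path
```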
NotASI/FineTome-Llama3.2-3B-1002
NotASI
2024-10-07T13:44:27Z
10
1
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "llama-3", "trl", "sft", "conversational", "en", "dataset:mlabonne/FineTome-100k", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:llama3.2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-04T12:11:48Z
--- language: - en license: llama3.2 tags: - text-generation-inference - transformers - unsloth - llama - llama-3 - trl - sft base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit datasets: - mlabonne/FineTome-100k model-index: - name: FineTome-Llama3.2-3B-1002 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 54.74 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 19.52 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 5.29 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 0.11 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 3.96 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 15.96 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-3B-1002 name: Open LLM Leaderboard --- # IMPORTANT In case you got the following error: ``` exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3 ``` Please upgrade your **transformer** package, that is, use the following code: ``` pip install --upgrade "transformers>=4.45" ``` # Uploaded model - **Developed by:** NotASI - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit # Details This model was trained on **mlabonne/FineTome-100k** for *2* epochs with **rslora** + **qlora**, and achieve the final training loss: *0.596400*. This model follows the same chat template as the base model one. This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. 
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NotASI__FineTome-Llama3.2-3B-1002) | Metric |Value| |-------------------|----:| |Avg. |16.60| |IFEval (0-Shot) |54.74| |BBH (3-Shot) |19.52| |MATH Lvl 5 (4-Shot)| 5.29| |GPQA (0-shot) | 0.11| |MuSR (0-shot) | 3.96| |MMLU-PRO (5-shot) |15.96|
jhkim12/DLthon_BERT_3
jhkim12
2024-10-07T13:42:31Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T13:13:41Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - generated_from_trainer metrics: - f1 model-index: - name: DLthon_BERT_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DLthon_BERT_3 This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0562 - F1: 0.9905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 178 | 0.3384 | 0.8918 | | No log | 2.0 | 356 | 0.2190 | 0.9425 | | 0.4141 | 3.0 | 534 | 0.0874 | 0.9748 | | 0.4141 | 4.0 | 712 | 0.0634 | 0.9858 | | 0.4141 | 5.0 | 890 | 0.0562 | 0.9905 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Tokenizers 0.19.1
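A hedged, non-pipeline inference sketch (not part of the generated card); the Korean example sentence is illustrative, and the meaning of the output classes is not documented in the card.

```python
# Hedged sketch: explicit inference with the fine-tuned kcbert classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "jhkim12/DLthon_BERT_3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("오늘 날씨가 정말 좋네요!", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class probabilities; the label semantics are defined by the training data
```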
OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF
OpenBuddy
2024-10-07T13:39:49Z
5
0
null
[ "gguf", "llamaify", "qwen2.5", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "base_model:OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k", "base_model:quantized:OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-07T07:19:02Z
--- base_model: OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k language: - zh - en - fr - de - ja - ko - it - fi license: apache-2.0 pipeline_tag: text-generation tags: - llamaify - qwen2.5 - llama-cpp - gguf-my-repo model-index: - name: openbuddy-qwen2.5llamaify-7b-v23.1-200k results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 56.73 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 36.4 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 11.4 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 8.61 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 13.81 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 32.75 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k name: Open LLM Leaderboard --- # ff670/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF This model was converted to GGUF format from [`OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k`](https://huggingface.co/OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-qwen2.5llamaify-7b-v23.1-200k) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo ff670/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF --hf-file openbuddy-qwen2.5llamaify-7b-v23.1-200k-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ff670/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF --hf-file openbuddy-qwen2.5llamaify-7b-v23.1-200k-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ff670/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF --hf-file openbuddy-qwen2.5llamaify-7b-v23.1-200k-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ff670/openbuddy-qwen2.5llamaify-7b-v23.1-200k-Q4_K_M-GGUF --hf-file openbuddy-qwen2.5llamaify-7b-v23.1-200k-q4_k_m.gguf -c 2048 ```
SidXXD/celeba-19354-group-47_attacked_with_gaussian_noise_image
SidXXD
2024-10-07T13:39:00Z
5
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-09-25T09:30:39Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base instance_prompt: photo of a <v1*> people tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - custom-diffusion inference: true --- # Custom Diffusion - SidXXD/celeba-19354-group-47_attacked_with_gaussian_noise_image These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on photo of a <v1*> people using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following. For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
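A hedged inference sketch following the diffusers Custom Diffusion example; the weight file names below are assumptions based on the usual diffusers naming convention, so check the repository's file listing before running.

```python
# Hedged sketch: load the Custom Diffusion attention weights and the <v1*> token embedding
# on top of the stable-diffusion-2-1-base pipeline named in the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

repo_id = "SidXXD/celeba-19354-group-47_attacked_with_gaussian_noise_image"
# Assumed file names (diffusers convention); verify against the repo contents.
pipe.unet.load_attn_procs(repo_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion(repo_id, weight_name="<v1*>.bin")

image = pipe("photo of a <v1*> people", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sample.png")
```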
RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf
RichardErkhov
2024-10-07T13:36:04Z
8
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T13:26:40Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SmolLM-360M-Instruct - GGUF - Model creator: https://huggingface.co/HuggingFaceTB/ - Original model: https://huggingface.co/HuggingFaceTB/SmolLM-360M-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SmolLM-360M-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q2_K.gguf) | Q2_K | 0.2GB | | [SmolLM-360M-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.2GB | | [SmolLM-360M-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.IQ3_S.gguf) | IQ3_S | 0.2GB | | [SmolLM-360M-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.2GB | | [SmolLM-360M-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.IQ3_M.gguf) | IQ3_M | 0.21GB | | [SmolLM-360M-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q3_K.gguf) | Q3_K | 0.22GB | | [SmolLM-360M-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.22GB | | [SmolLM-360M-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [SmolLM-360M-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.21GB | | [SmolLM-360M-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q4_0.gguf) | Q4_0 | 0.21GB | | [SmolLM-360M-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.21GB | | [SmolLM-360M-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.24GB | | [SmolLM-360M-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q4_K.gguf) | Q4_K | 0.25GB | | [SmolLM-360M-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [SmolLM-360M-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q4_1.gguf) | Q4_1 | 0.23GB | | [SmolLM-360M-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q5_0.gguf) | Q5_0 | 0.25GB | | [SmolLM-360M-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.26GB | | 
[SmolLM-360M-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q5_K.gguf) | Q5_K | 0.27GB | | [SmolLM-360M-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.27GB | | [SmolLM-360M-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q5_1.gguf) | Q5_1 | 0.27GB | | [SmolLM-360M-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q6_K.gguf) | Q6_K | 0.34GB | | [SmolLM-360M-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceTB_-_SmolLM-360M-Instruct-gguf/blob/main/SmolLM-360M-Instruct.Q8_0.gguf) | Q8_0 | 0.36GB | Original model description: --- license: apache-2.0 base_model: HuggingFaceTB/SmolLM-360M tags: - alignment-handbook - trl - sft datasets: - Magpie-Align/Magpie-Pro-300K-Filtered - bigcode/self-oss-instruct-sc2-exec-filter-50k - teknium/OpenHermes-2.5 - HuggingFaceTB/everyday-conversations-llama3.1-2k library_name: transformers language: - en --- # SmolLM-360M-Instruct <center> <img src="https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png" alt="SmolLM" width="1100" height="600"> </center> ## Model Summary Chat with the model at: https://huggingface.co/spaces/HuggingFaceTB/instant-smol SmolLM is a series of language models available in three sizes: 135M, 360M, and 1.7B parameters. These models are trained on [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus), a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our [blogpost](https://huggingface.co/blog/smollm). To build SmolLM-Instruct, we finetune the base models on publicly available datasets. ## Changelog |Release|Description| |-|-| |v0.1| Initial release of SmolLM-Instruct. We finetune on the permissive subset of the [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) dataset, combined with [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Then, we perform DPO (Direct Preference Optimization) for one epoch on [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) for the 135M and 1.7B models, and [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) for the 360M model.| |v0.2| We changed the finetuning mix to datasets more suitable for smol models. We train on a new dataset of 2k simple everyday conversations we generated by llama3.1-70B [everyday-conversations-llama3.1-2k](https://huggingface.co/datasets/HuggingFaceTB/everyday-conversations-llama3.1-2k/), [Magpie-Pro-300K-Filtere](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered), [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), and a small subset of [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)| v0.2 models are better at staying on topic and responding appropriately to standard prompts, such as greetings and questions about their role as AI assistants. SmolLM-360M-Instruct (v0.2) has a 63.3% win rate over SmolLM-360M-Instruct (v0.1) on AlpacaEval. You can find the details [here](https://huggingface.co/datasets/HuggingFaceTB/alpaca_eval_details/). 
You can load v0.1 models by specifying `revision="v0.1"` in the transformers code: ```python model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct", revision="v0.1") ``` ## Usage ### Local Applications ⚡ For local applications, you can find optimized implementations of the model in MLC, GGUF and Transformers.js formats, in addition to fast in-browser demos in this collection: https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0 We noticed that 4bit quantization degrades the quality of the 135M and 360M, so we use `q016` for MLC and ONNX/Transformers.js checkpoints for the WebGPU demos. We also suggest using temperature 0.2 and top-p 0.9. ### Transformers ```bash pip install transformers ``` ```python # pip install transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "HuggingFaceTB/SmolLM-360M-Instruct" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) messages = [{"role": "user", "content": "What is the capital of France."}] input_text=tokenizer.apply_chat_template(messages, tokenize=False) print(input_text) inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ### Chat in TRL You can also use the TRL CLI to chat with the model from the terminal: ```bash pip install trl trl chat --model_name_or_path HuggingFaceTB/SmolLM-360M-Instruct --device cpu ``` ## Limitations Additionally, the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data, we invite users to leverage them as assistive tools rather than definitive sources of information. We find that they can handle general knowledge questions, creative writing and basic Python programming. But they are English only and may have difficulty with arithmetics, editing tasks and complex reasoning. For more details about the models' capabilities, please refer to our [blog post](https://huggingface.co/blog/smollm). ## Training parameters We train the models using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) with the datasets mentioned in the changelog, using these parameters for v0.2 (most of them are from Zephyr Gemma recipe): - 1 epoch - lr 1e-3 - cosine schedule - warmup ratio 0.1 - global batch size 262k tokens You can find the training recipe here: https://github.com/huggingface/alignment-handbook/tree/smollm/recipes/smollm # Citation ```bash @misc{allal2024SmolLM, title={SmolLM - blazingly fast and remarkably powerful}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf}, year={2024}, } ```
huimanho/CustomModel_yelp
huimanho
2024-10-07T13:33:55Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T13:33:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
isom5240/CustomModel_yelp_2024fall_v2
isom5240
2024-10-07T13:33:33Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T13:33:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kitosforos/sd-class-butterflies-32
kitosforos
2024-10-07T13:31:32Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-10-07T13:31:13Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('kitosforos/sd-class-butterflies-32') image = pipeline().images[0] image ```
vhab10/Llama-3-1-8B-Instruct-Unsloth-LoRA-4bit
vhab10
2024-10-07T13:27:11Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "dataset:ai-maker-space/acronyms_and_initialisms_translated", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "license:mit", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-22T15:02:28Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit language: - en license: mit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft datasets: - ai-maker-space/acronyms_and_initialisms_translated library_name: transformers --- ## Overview This model is a fine-tuned version of the **Meta LLaMA 3.1 8B Instruct** model. It has been fine-tuned using the [Unsloth library](https://huggingface.co/unsloth) with **4-bit quantization** for efficient inference and deployment. The fine-tuning process utilized a synthetic dataset from **@AI Maker Space** consisting of acronyms and their expanded forms in English and was performed using the LoRA (Low-Rank Adaptation) technique, specifically designed for instruction-based tasks. It can be easily deployed in low-resource environments thanks to the 4-bit quantization. # Uploaded model - **Developed by:** vhab10 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
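A usage example is not included above; a minimal sketch of querying the checkpoint with the Hugging Face Transformers chat template is given below. It assumes `bitsandbytes` and `accelerate` are installed (the repository stores 4-bit weights), and the prompt and generation settings are illustrative only, not taken from the original card.

```python
# Minimal usage sketch (untested) for the 4-bit fine-tune described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vhab10/Llama-3-1-8B-Instruct-Unsloth-LoRA-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # picks up the stored 4-bit config

messages = [{"role": "user", "content": "Expand the acronym NASA."}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```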
RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf
RichardErkhov
2024-10-07T13:24:34Z
13,838
0
null
[ "gguf", "arxiv:2401.06066", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T04:53:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DeepSeek-Coder-V2-Lite-Instruct - GGUF - Model creator: https://huggingface.co/deepseek-ai/ - Original model: https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf) | Q2_K | 5.99GB | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf) | IQ3_XS | 6.63GB | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf) | IQ3_S | 6.97GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.97GB | | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf) | IQ3_M | 7.03GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf) | Q3_K | 7.57GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.57GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K_L.gguf) | Q3_K_L | 7.88GB | | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.05GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_0.gguf) | Q4_0 | 8.29GB | | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf) | IQ4_NL | 8.36GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.88GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf) | Q4_K | 9.65GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.65GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_1.gguf) | Q4_1 | 9.2GB | | 
[DeepSeek-Coder-V2-Lite-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_0.gguf) | Q5_0 | 10.1GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.38GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_K.gguf) | Q5_K | 11.04GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_K_M.gguf) | Q5_K_M | 11.04GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_1.gguf) | Q5_1 | 11.0GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf) | Q6_K | 13.1GB | | [DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_DeepSeek-Coder-V2-Lite-Instruct-gguf/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf) | Q8_0 | 15.56GB | Original model description: --- license: other license_name: deepseek-license license_link: LICENSE --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" 
style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;"> <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;"> <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="#4-api-platform">API Platform</a> | <a href="#5-how-to-run-locally">How to Use</a> | <a href="#6-license">License</a> | </p> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a> </p> # DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence ## 1. Introduction We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. <p align="center"> <img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true"> </p> In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found [here](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/supported_langs.txt). ## 2. Model Downloads We release the DeepSeek-Coder-V2 with 16B and 236B parameters based on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework, which has actived parameters of only 2.4B and 21B , including base and instruct models, to the public. <div align="center"> | **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** | | :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: | | DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) | | DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | | DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) | | DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) | </div> ## 3. Chat Website You can chat with the DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in) ## 4. 
API Platform We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/), and you can also pay-as-you-go at an unbeatable price. <p align="center"> <img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true"> </p> ## 5. How to run locally **Here, we provide some examples of how to use DeepSeek-Coder-V2-Lite model. If you want to utilize DeepSeek-Coder-V2 in BF16 format for inference, 80GB*8 GPUs are required.** ### Inference with Huggingface's Transformers You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference. #### Code Completion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() input_text = "#write a quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` #### Code Insertion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() input_text = """<|fim▁begin|>def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[0] left = [] right = [] <|fim▁hole|> if arr[i] < pivot: left.append(arr[i]) else: right.append(arr[i]) return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>""" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):]) ``` #### Chat Completion ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) # tokenizer.eos_token_id is the id of <|end▁of▁sentence|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository. 
An example of chat template is as belows: ```bash <|begin▁of▁sentence|>User: {user_message_1} Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2} Assistant: ``` You can also add an optional system message: ```bash <|begin▁of▁sentence|>{system_message} User: {user_message_1} Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2} Assistant: ``` ### Inference with vLLM (recommended) To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650. ```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams max_model_len, tp_size = 8192, 1 model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True) sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id]) messages_list = [ [{"role": "user", "content": "Who are you?"}], [{"role": "user", "content": "write a quick sort algorithm in python."}], [{"role": "user", "content": "Write a piece of quicksort code in C++."}], ] prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list] outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params) generated_text = [output.outputs[0].text for output in outputs] print(generated_text) ``` ## 6. License This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use. ## 7. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
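This repository ships GGUF files, so the Transformers and vLLM snippets from the original card do not apply to it directly. A hypothetical sketch with `llama-cpp-python` is shown below; it assumes a llama.cpp build recent enough to support the DeepSeek-V2 MoE architecture and uses the Q4_K_M file from the table above; adjust the path and settings to your setup.

```python
# Hypothetical usage sketch for the GGUF quants in this repository (llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-Coder-V2-Lite-Instruct.Q4_K_M.gguf",  # any file from the table above
    n_ctx=4096,       # context window to allocate (the model itself supports up to 128k)
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a quick sort algorithm in Python."}],
    max_tokens=256,
    temperature=0.3,
)
print(response["choices"][0]["message"]["content"])
```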
csikasote/mms-1b-all-bem-genbed-f-model
csikasote
2024-10-07T13:23:41Z
5
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "genbed", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-26T17:16:54Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - genbed - mms - generated_from_trainer metrics: - wer model-index: - name: mms-1b-all-bem-genbed-f-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bem-genbed-f-model This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the GENBED - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.1823 - Wer: 0.3431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 6.6556 | 0.1370 | 100 | 0.5951 | 0.6643 | | 0.4415 | 0.2740 | 200 | 0.2734 | 0.4565 | | 0.3448 | 0.4110 | 300 | 0.2482 | 0.4289 | | 0.3459 | 0.5479 | 400 | 0.2392 | 0.4149 | | 0.3184 | 0.6849 | 500 | 0.2304 | 0.4085 | | 0.3058 | 0.8219 | 600 | 0.2372 | 0.4108 | | 0.3077 | 0.9589 | 700 | 0.2271 | 0.4172 | | 0.2812 | 1.0959 | 800 | 0.2217 | 0.3983 | | 0.3297 | 1.2329 | 900 | 0.2209 | 0.3984 | | 0.2817 | 1.3699 | 1000 | 0.2163 | 0.4124 | | 0.2927 | 1.5068 | 1100 | 0.2146 | 0.3863 | | 0.2806 | 1.6438 | 1200 | 0.2106 | 0.3851 | | 0.2574 | 1.7808 | 1300 | 0.2098 | 0.3866 | | 0.2829 | 1.9178 | 1400 | 0.2067 | 0.3772 | | 0.2764 | 2.0548 | 1500 | 0.2076 | 0.3789 | | 0.2635 | 2.1918 | 1600 | 0.2076 | 0.3769 | | 0.2761 | 2.3288 | 1700 | 0.2068 | 0.3801 | | 0.2854 | 2.4658 | 1800 | 0.1994 | 0.3645 | | 0.2557 | 2.6027 | 1900 | 0.2016 | 0.3861 | | 0.2717 | 2.7397 | 2000 | 0.2011 | 0.3734 | | 0.2504 | 2.8767 | 2100 | 0.1989 | 0.3674 | | 0.2606 | 3.0137 | 2200 | 0.1990 | 0.3835 | | 0.2583 | 3.1507 | 2300 | 0.2028 | 0.3666 | | 0.2591 | 3.2877 | 2400 | 0.1952 | 0.3507 | | 0.2408 | 3.4247 | 2500 | 0.1988 | 0.3637 | | 0.2485 | 3.5616 | 2600 | 0.1972 | 0.3593 | | 0.2474 | 3.6986 | 2700 | 0.1949 | 0.3534 | | 0.2398 | 3.8356 | 2800 | 0.1959 | 0.3697 | | 0.2512 | 3.9726 | 2900 | 0.1906 | 0.3559 | | 0.2266 | 4.1096 | 3000 | 0.1905 | 0.3482 | | 0.2538 | 4.2466 | 3100 | 0.1916 | 0.3521 | | 0.2268 | 4.3836 | 3200 | 0.1914 | 0.3895 | | 0.2249 | 4.5205 | 3300 | 0.1897 | 0.3417 | | 0.2416 | 4.6575 | 3400 | 0.1877 | 0.3458 | | 0.2421 | 4.7945 | 3500 | 0.1872 | 0.3412 | | 0.244 | 4.9315 | 3600 | 0.1855 | 0.3528 | | 0.2371 | 5.0685 | 3700 | 0.1871 | 0.3447 | | 0.2383 | 5.2055 | 3800 | 0.1833 | 0.3523 | | 0.2409 | 5.3425 | 3900 | 0.1886 | 0.3487 | | 0.2312 | 5.4795 | 4000 | 0.1848 | 0.3438 | | 0.2261 | 5.6164 | 4100 | 0.1866 | 0.3469 | | 0.2169 | 5.7534 | 4200 | 0.1841 | 0.3376 | | 0.2283 | 5.8904 | 4300 | 0.1865 | 0.3412 | | 0.2182 | 6.0274 | 4400 | 0.1823 | 0.3431 | | 0.2141 | 6.1644 | 4500 | 0.1858 | 0.3403 | | 0.2127 | 6.3014 | 4600 | 0.1876 | 0.3356 | | 0.229 | 6.4384 | 4700 | 0.1863 | 0.3361 | ### Framework versions - Transformers 4.46.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
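The card reports WER but no inference snippet. A minimal sketch (untested) of transcribing a Bemba recording with the Transformers ASR pipeline follows; `audio.wav` is a placeholder for a local 16 kHz mono file.

```python
# Minimal inference sketch for the fine-tuned MMS checkpoint described above.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bem-genbed-f-model",
)

result = asr("audio.wav")  # the pipeline decodes and resamples the file via ffmpeg
print(result["text"])
```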
NotASI/FineTome-Llama3.2-1B-0929
NotASI
2024-10-07T13:23:18Z
172
1
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "llama-3", "trl", "sft", "conversational", "en", "dataset:mlabonne/FineTome-100k", "base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct-bnb-4bit", "license:llama3.2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-29T16:29:52Z
--- language: - en license: llama3.2 tags: - text-generation-inference - transformers - unsloth - llama - llama-3 - trl - sft base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit datasets: - mlabonne/FineTome-100k model-index: - name: FineTome-Llama3.2-1B-0929 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 39.91 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 5.74 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 1.28 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 3.02 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 2.66 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 4.76 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-Llama3.2-1B-0929 name: Open LLM Leaderboard --- # Notice **Code + Math** optimized version coming soon! # IMPORTANT In case you got the following error: ``` exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3 ``` Please upgrade your **transformer** package, that is, use the following code: ``` pip install --upgrade "transformers>=4.45" ``` # Uploaded model - **Developed by:** NotASI - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct-bnb-4bit # Details This model was trained on **mlabonne/FineTome-100k** for *2* epochs with **rslora** + **qlora**, and achieve the final training loss: *0.796700*. This model follows the same chat template as the base model one. This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. 
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Usage Sample notebook (Unsloth): https://colab.research.google.com/drive/1cSCvemKsC0JVmMs4jdfoTqM8SQQ_DSBt?usp=sharing **Dependencies** ``` !pip install unsloth !pip install --upgrade "transformers>=4.45" ``` **Code** ```python from unsloth.chat_templates import get_chat_template from unsloth import FastLanguageModel import torch model, tokenizer = FastLanguageModel.from_pretrained( model_name = "NotASI/FineTome-Llama3.2-1B-0929", ) tokenizer = get_chat_template( tokenizer, chat_template = "llama-3.1", ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference messages = [ {"role": "user", "content": "Hi!"}, ] inputs = tokenizer.apply_chat_template( messages, tokenize = True, add_generation_prompt = True, # Must add for generation return_tensors = "pt", ).to("cuda") from transformers import TextStreamer text_streamer = TextStreamer(tokenizer, skip_prompt = True) _ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True, temperature = 1.5, min_p = 0.1) ``` ## Example 1 **Input**: ``` Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers. ``` **Output**: ``` Here are the next 10 numbers in the Fibonacci sequence: 1. 13 2. 21 3. 34 4. 55 5. 89 6. 144 7. 233 8. 377 9. 610 10. 987 11. 1597 12. 2584 13. 4190 14. 6765 15. 10946 16. 17715 Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|> ``` ## Example 2 **Input**: ``` Who are you? Briefy introduce yourself. ``` **Output**: ``` Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|> ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NotASI__FineTome-Llama3.2-1B-0929) | Metric |Value| |-------------------|----:| |Avg. | 9.56| |IFEval (0-Shot) |39.91| |BBH (3-Shot) | 5.74| |MATH Lvl 5 (4-Shot)| 1.28| |GPQA (0-shot) | 3.02| |MuSR (0-shot) | 2.66| |MMLU-PRO (5-shot) | 4.76|
QuantFactory/Qwen2.5-0.5b-RBase-GGUF
QuantFactory
2024-10-07T13:21:22Z
82
2
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T13:16:36Z
--- base_model: - Qwen/Qwen2.5-0.5B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Qwen2.5-0.5b-RBase-GGUF This is a quantized version of [KingNish/Qwen2.5-0.5b-RBase](https://huggingface.co/KingNish/Qwen2.5-0.5b-RBase) created using llama.cpp # Original Model Card # Qwen 2.5 0.5B Model ## Model Description This model is a compact yet powerful language model trained to answer a variety of questions with impressive quality. Despite its smaller size, it has demonstrated performance comparable to Llama 3.2 1B, and in some cases, it even outperforms it. This model was specifically trained on 12,800 rows of the Magpie 300k Dataset. ## Performance The Qwen 2.5 model has shown promising results in various tests, including the "strawberry test" and the "Decimal Comparison test", where it successfully provided accurate answers. However, it is important to note that, like many models of its size, it may occasionally produce incorrect answers or flawed reasoning. Continuous improvements and full training are planned to enhance its performance further. ## How to Use To use the Qwen 2.5 model, you can load it using the Hugging Face Transformers library. Here’s a simple example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "KingNish/Qwen2.5-0.5b-Test-ft" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Which is greater 9.9 or 9.11 ??" messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Future Work I am actively working on improving the Qwen 2.5 model by training it on a larger dataset. # Uploaded model - **Developed by:** KingNish - **License:** apache-2.0 - **Finetuned from model :** Qwen/Qwen2.5-0.5B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Tombiczek/sentiment_model_deberta_v1
Tombiczek
2024-10-07T13:14:55Z
164
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T13:14:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf
RichardErkhov
2024-10-07T13:13:34Z
9
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T09:42:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Meta-Llama-3-8B-Instruct-sft-webshop-iter2 - GGUF - Model creator: https://huggingface.co/leap-llm/ - Original model: https://huggingface.co/leap-llm/Meta-Llama-3-8B-Instruct-sft-webshop-iter2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q2_K.gguf) | Q2_K | 2.96GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K.gguf) | Q3_K | 3.74GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_0.gguf) | Q4_0 | 4.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K.gguf) | Q4_K | 4.58GB | | 
[Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q4_1.gguf) | Q4_1 | 4.78GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_0.gguf) | Q5_0 | 5.21GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K.gguf) | Q5_K | 5.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q5_1.gguf) | Q5_1 | 5.65GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q6_K.gguf) | Q6_K | 6.14GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/leap-llm_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter2-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter2.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/TwinLlama-3.1-8B-GGUF
QuantFactory
2024-10-07T13:12:39Z
81
2
transformers
[ "transformers", "gguf", "unsloth", "trl", "sft", "en", "dataset:mlabonne/llmtwin", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-07T11:45:19Z
--- base_model: meta-llama/Meta-Llama-3.1-8B datasets: - mlabonne/llmtwin language: - en library_name: transformers license: apache-2.0 tags: - unsloth - trl - sft --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/TwinLlama-3.1-8B-GGUF This is quantized version of [mlabonne/TwinLlama-3.1-8B](https://huggingface.co/mlabonne/TwinLlama-3.1-8B) created using llama.cpp # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/Ddo6O27iJ0uFiGp7Y5py1.png) # 👥 TwinLlama-3.1-8B TwinLlama-3.1-8B is a model created for the [LLM Engineer's Handbook](https://a.co/d/9vYzTUC), trained on [mlabonne/llmtwin](https://huggingface.co/datasets/mlabonne/llmtwin). It is designed to act as a digital twin, which is a clone of myself and my co-authors (Paul Iusztin and Alex Vesa), imitating our writing style and drawing knowledge from our articles. --- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
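The card does not include a usage snippet for the GGUF files. A hypothetical `llama-cpp-python` sketch is given below; the `filename` glob is an assumption, so substitute whichever quantization level you actually download from this repository.

```python
# Hypothetical sketch: running a TwinLlama GGUF quant directly from the Hub with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/TwinLlama-3.1-8B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any .gguf file present in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short paragraph about building an LLM twin."}],
    max_tokens=200,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```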
FrankRin/InsLLM
FrankRin
2024-10-07T13:05:48Z
12
1
null
[ "safetensors", "qwen2", "text-generation", "conversational", "zh", "dataset:FrankRin/Insur-QA", "base_model:Qwen/Qwen1.5-14B-Chat", "base_model:finetune:Qwen/Qwen1.5-14B-Chat", "license:apache-2.0", "region:us" ]
text-generation
2024-10-06T15:12:03Z
---
license: apache-2.0
datasets:
- FrankRin/Insur-QA
language:
- zh
base_model:
- Qwen/Qwen1.5-14B-Chat
pipeline_tag: text-generation
---

This repository contains InsLLM, built with Qwen1.5-14B-Chat as the base model.

<div align="center">
  <h1>InsQABench</h1>
</div>

InsQABench is the first large-scale specialized question-answering dataset and evaluation benchmark in the Chinese insurance sector, developed and open-sourced by the VLR Lab (Vision and Learning Representation Group) at Huazhong University of Science and Technology.

## Overview

InsLLM is an intelligent insurance system equipped with capabilities for insurance-related question answering, database querying, and contract parsing. Designed for diverse user groups and application scenarios, it offers the following key features:

* **Insurance Text Processing:** The system understands and generates content involving the complex professional terms and document formats specific to the insurance domain, including tasks such as information extraction and document summarization. We constructed fine-tuning datasets from publicly available insurance data and real-world insurance documents.
* **Insurance Reasoning:** Leveraging the SQL-ReAct method, the system can optimize and correct SQL queries based on user inputs, efficiently handling complex query tasks within insurance databases.
* **Insurance Knowledge Compliance:** Equipped with the Insur-Know module, the system supports retrieval-augmented contract parsing and fact extraction, ensuring accurate handling of complex issues in insurance contracts.

Additionally, our research offers the following contributions:

* **High-quality insurance question-answering training datasets and effective training paradigms**
* **A comprehensive insurance model evaluation framework and evaluation datasets**

## Insur-QA Dataset

In the basic insurance knowledge section, we translated the InsuranceQA dataset to create the InsuranceQA_zh dataset. For the insurance contract data section, we downloaded PDF insurance policies from various insurance companies available online and parsed them using the Adobe PDF Extract API. After restructuring the paragraph text from the parsed results, we used Gemini to generate QA pairs, forming <Q, A, E> triples. The specific composition of the datasets is as follows:

<table border="1">
  <tr>
    <th>Task</th>
    <th>Split</th>
    <th>Source</th>
    <th>Size</th>
  </tr>
  <tr>
    <td rowspan="2">Basic Insurance Knowledge Q&A</td>
    <td>Training Set</td>
    <td>BX_GPT3.5</td>
    <td>10k</td>
  </tr>
  <tr>
    <td>Test Set</td>
    <td>Insurance_QA_zh</td>
    <td>3k</td>
  </tr>
  <tr>
    <td rowspan="2">Insurance Contract Q&A</td>
    <td>Training Set</td>
    <td>Insurance Contracts</td>
    <td>40k</td>
  </tr>
  <tr>
    <td>Test Set</td>
    <td>Insurance Contracts</td>
    <td>100</td>
  </tr>
  <tr>
    <td rowspan="2">Insurance Database Q&A</td>
    <td>Training Set</td>
    <td>Insurance Contracts</td>
    <td>44k</td>
  </tr>
  <tr>
    <td>Test Set</td>
    <td>Insurance Contracts</td>
    <td>546</td>
  </tr>
</table>

## Citation

If you find our work helpful in your research, please consider citing it as follows:

```
@misc{ }
```

## License

InsQABench is available under the Apache License.
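As a usage illustration (not part of the original card), the sketch below queries InsLLM through the standard Qwen1.5 chat template via Transformers. The dtype, device placement, and the example Chinese prompt are assumptions for demonstration only.

```python
# Minimal sketch: chat with InsLLM using the Qwen1.5-style chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FrankRin/InsLLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; a 14B model needs substantial GPU memory
    device_map="auto",
)

messages = [
    # "You are an intelligent assistant for the insurance domain."
    {"role": "system", "content": "你是一名保险领域的智能助手。"},
    # "How long is the typical waiting period for critical illness insurance?"
    {"role": "user", "content": "重大疾病保险的等待期一般是多久?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```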
Gnider/aug_bert_6ep_9000
Gnider
2024-10-07T13:00:51Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T12:43:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adishourya/results__fullrun__0710-111627
adishourya
2024-10-07T12:53:31Z
8
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/paligemma-3b-mix-224", "base_model:adapter:google/paligemma-3b-mix-224", "license:gemma", "region:us" ]
null
2024-10-07T09:16:47Z
---
base_model: google/paligemma-3b-mix-224
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: results__fullrun__0710-111627
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results__fullrun__0710-111627

This model is a fine-tuned version of [google/paligemma-3b-mix-224](https://huggingface.co/google/paligemma-3b-mix-224) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 2
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.3.0.post101
- Datasets 2.19.1
- Tokenizers 0.19.1
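Since this repository stores a PEFT adapter rather than full weights, a hedged loading sketch follows: it attaches the adapter to the gated `google/paligemma-3b-mix-224` base model (accessing it requires accepting the Gemma license). The prompt prefix, image path, and generation settings are illustrative assumptions; the card does not document the intended task format.

```python
# Minimal sketch: load the PaliGemma base model and attach this PEFT adapter.
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
from PIL import Image

base_id = "google/paligemma-3b-mix-224"          # gated; requires accepted license + HF auth
adapter_id = "adishourya/results__fullrun__0710-111627"

processor = AutoProcessor.from_pretrained(base_id)
base = PaliGemmaForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

image = Image.open("example.jpg")                # placeholder image path
# "caption en" is a standard PaliGemma-mix prompt prefix; the adapter's actual
# training prompts are unknown, so treat this as a guess.
inputs = processor(text="caption en", images=image, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=64)
# Decode only the generated continuation, not the prompt tokens.
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```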
qinxianliu/FUE-v1
qinxianliu
2024-10-07T12:52:58Z
1,219
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "mteb", "en", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-09T06:03:02Z
--- language: en tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: FUE-v1 results: - dataset: config: default name: MTEB ArguAna (default) revision: c22ab2a51041ffd869aaddef7af8d8215647e41a split: test type: mteb/arguana metrics: - type: main_score value: 28.538000000000004 - type: map_at_1 value: 12.518 - type: map_at_10 value: 22.123 - type: map_at_100 value: 23.69 - type: map_at_1000 value: 23.753 - type: map_at_20 value: 23.143 - type: map_at_3 value: 18.255 - type: map_at_5 value: 20.101 - type: mrr_at_1 value: 12.51778093883357 - type: mrr_at_10 value: 22.1181896181896 - type: mrr_at_100 value: 23.691403907137428 - type: mrr_at_1000 value: 23.754663976195605 - type: mrr_at_20 value: 23.14463626403498 - type: mrr_at_3 value: 18.25509720246565 - type: mrr_at_5 value: 20.090090090090076 - type: nauc_map_at_1000_diff1 value: 9.323271975614725 - type: nauc_map_at_1000_max value: 14.254988429417367 - type: nauc_map_at_1000_std value: 0.6719763425462996 - type: nauc_map_at_100_diff1 value: 9.282199840240164 - type: nauc_map_at_100_max value: 14.292222368573587 - type: nauc_map_at_100_std value: 0.7292665505578078 - type: nauc_map_at_10_diff1 value: 9.281903499487566 - type: nauc_map_at_10_max value: 13.866426442021092 - type: nauc_map_at_10_std value: 0.25246485902079857 - type: nauc_map_at_1_diff1 value: 17.7280206384371 - type: nauc_map_at_1_max value: 10.815824715688484 - type: nauc_map_at_1_std value: -2.743162847857448 - type: nauc_map_at_20_diff1 value: 9.102539119027215 - type: nauc_map_at_20_max value: 14.315213373754535 - type: nauc_map_at_20_std value: 0.4814890472114564 - type: nauc_map_at_3_diff1 value: 11.182980254921844 - type: nauc_map_at_3_max value: 12.459436078396347 - type: nauc_map_at_3_std value: -0.556503984217633 - type: nauc_map_at_5_diff1 value: 10.337883923056356 - type: nauc_map_at_5_max value: 13.156434240892349 - type: nauc_map_at_5_std value: -0.29354150586109307 - type: nauc_mrr_at_1000_diff1 value: 9.324556497025348 - type: nauc_mrr_at_1000_max value: 14.253533211305847 - type: nauc_mrr_at_1000_std value: 0.6697271579920419 - type: nauc_mrr_at_100_diff1 value: 9.283500049480148 - type: nauc_mrr_at_100_max value: 14.29077459198726 - type: nauc_mrr_at_100_std value: 0.7270254698558568 - type: nauc_mrr_at_10_diff1 value: 9.307586753777215 - type: nauc_mrr_at_10_max value: 13.867744805840864 - type: nauc_mrr_at_10_std value: 0.24609376657604679 - type: nauc_mrr_at_1_diff1 value: 17.7280206384371 - type: nauc_mrr_at_1_max value: 10.815824715688484 - type: nauc_mrr_at_1_std value: -2.743162847857448 - type: nauc_mrr_at_20_diff1 value: 9.104040005863022 - type: nauc_mrr_at_20_max value: 14.313919541370158 - type: nauc_mrr_at_20_std value: 0.47929028753819247 - type: nauc_mrr_at_3_diff1 value: 11.182980254921844 - type: nauc_mrr_at_3_max value: 12.459436078396347 - type: nauc_mrr_at_3_std value: -0.556503984217633 - type: nauc_mrr_at_5_diff1 value: 10.292026698958562 - type: nauc_mrr_at_5_max value: 13.162311512830788 - type: nauc_mrr_at_5_std value: -0.279085086218627 - type: nauc_ndcg_at_1000_diff1 value: 7.3163309764159825 - type: nauc_ndcg_at_1000_max value: 16.010286453339404 - type: nauc_ndcg_at_1000_std value: 2.607890495864114 - type: nauc_ndcg_at_100_diff1 value: 6.345607449579556 - type: nauc_ndcg_at_100_max value: 17.107603089582387 - type: nauc_ndcg_at_100_std value: 4.24098978361439 - type: nauc_ndcg_at_10_diff1 value: 5.8109195734245125 - type: nauc_ndcg_at_10_max value: 15.749698291184078 - type: 
nauc_ndcg_at_10_std value: 1.7791956012747472 - type: nauc_ndcg_at_1_diff1 value: 17.7280206384371 - type: nauc_ndcg_at_1_max value: 10.815824715688484 - type: nauc_ndcg_at_1_std value: -2.743162847857448 - type: nauc_ndcg_at_20_diff1 value: 4.8931445052931535 - type: nauc_ndcg_at_20_max value: 17.242324916281724 - type: nauc_ndcg_at_20_std value: 2.5398984271374716 - type: nauc_ndcg_at_3_diff1 value: 9.692595930124401 - type: nauc_ndcg_at_3_max value: 13.040710081661585 - type: nauc_ndcg_at_3_std value: 0.04190136287761992 - type: nauc_ndcg_at_5_diff1 value: 8.29716057792536 - type: nauc_ndcg_at_5_max value: 14.202672828576501 - type: nauc_ndcg_at_5_std value: 0.4915852638473377 - type: nauc_precision_at_1000_diff1 value: -4.340157000443621 - type: nauc_precision_at_1000_max value: 47.2664467039377 - type: nauc_precision_at_1000_std value: 55.01988662253597 - type: nauc_precision_at_100_diff1 value: -7.7805105013646445 - type: nauc_precision_at_100_max value: 35.464559183683306 - type: nauc_precision_at_100_std value: 32.179756475948615 - type: nauc_precision_at_10_diff1 value: -2.8054318748220983 - type: nauc_precision_at_10_max value: 20.50642719670412 - type: nauc_precision_at_10_std value: 5.658488748996167 - type: nauc_precision_at_1_diff1 value: 17.7280206384371 - type: nauc_precision_at_1_max value: 10.815824715688484 - type: nauc_precision_at_1_std value: -2.743162847857448 - type: nauc_precision_at_20_diff1 value: -8.125102884571286 - type: nauc_precision_at_20_max value: 26.61981123742234 - type: nauc_precision_at_20_std value: 9.015277052264246 - type: nauc_precision_at_3_diff1 value: 6.293777341889125 - type: nauc_precision_at_3_max value: 14.423054474164651 - type: nauc_precision_at_3_std value: 1.4341093423522946 - type: nauc_precision_at_5_diff1 value: 3.7181074720510505 - type: nauc_precision_at_5_max value: 16.654168420354303 - type: nauc_precision_at_5_std value: 2.2783035538057934 - type: nauc_recall_at_1000_diff1 value: -4.340157000443143 - type: nauc_recall_at_1000_max value: 47.26644670393844 - type: nauc_recall_at_1000_std value: 55.01988662253498 - type: nauc_recall_at_100_diff1 value: -7.780510501364643 - type: nauc_recall_at_100_max value: 35.46455918368321 - type: nauc_recall_at_100_std value: 32.179756475948565 - type: nauc_recall_at_10_diff1 value: -2.8054318748220988 - type: nauc_recall_at_10_max value: 20.50642719670411 - type: nauc_recall_at_10_std value: 5.658488748996162 - type: nauc_recall_at_1_diff1 value: 17.7280206384371 - type: nauc_recall_at_1_max value: 10.815824715688484 - type: nauc_recall_at_1_std value: -2.743162847857448 - type: nauc_recall_at_20_diff1 value: -8.125102884571232 - type: nauc_recall_at_20_max value: 26.619811237422397 - type: nauc_recall_at_20_std value: 9.015277052264283 - type: nauc_recall_at_3_diff1 value: 6.293777341889125 - type: nauc_recall_at_3_max value: 14.423054474164635 - type: nauc_recall_at_3_std value: 1.4341093423523195 - type: nauc_recall_at_5_diff1 value: 3.7181074720510505 - type: nauc_recall_at_5_max value: 16.65416842035427 - type: nauc_recall_at_5_std value: 2.278303553805766 - type: ndcg_at_1 value: 12.518 - type: ndcg_at_10 value: 28.538000000000004 - type: ndcg_at_100 value: 36.315 - type: ndcg_at_1000 value: 37.905 - type: ndcg_at_20 value: 32.235 - type: ndcg_at_3 value: 20.277 - type: ndcg_at_5 value: 23.625 - type: precision_at_1 value: 12.518 - type: precision_at_10 value: 4.957000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.099 - type: precision_at_20 value: 
3.2079999999999997 - type: precision_at_3 value: 8.725 - type: precision_at_5 value: 6.8709999999999996 - type: recall_at_1 value: 12.518 - type: recall_at_10 value: 49.573 - type: recall_at_100 value: 86.41499999999999 - type: recall_at_1000 value: 98.72 - type: recall_at_20 value: 64.154 - type: recall_at_3 value: 26.173999999999996 - type: recall_at_5 value: 34.353 task: type: Retrieval - dataset: config: default name: MTEB AskUbuntuDupQuestions (default) revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 split: test type: mteb/askubuntudupquestions-reranking metrics: - type: main_score value: 66.82497796556063 - type: map value: 66.82497796556063 - type: mrr value: 79.41322604757507 - type: nAUC_map_diff1 value: 12.416876133855089 - type: nAUC_map_max value: 26.748567859708082 - type: nAUC_map_std value: 17.369392917676496 - type: nAUC_mrr_diff1 value: 24.35046473918137 - type: nAUC_mrr_max value: 41.748545887921786 - type: nAUC_mrr_std value: 20.095859022985742 task: type: Reranking - dataset: config: default name: MTEB BIOSSES (default) revision: d3fb88f8f02e40887cd149695127462bbcf29b4a split: test type: mteb/biosses-sts metrics: - type: cosine_pearson value: 77.26959202649614 - type: cosine_spearman value: 72.60346154803956 - type: euclidean_pearson value: 75.60706813872336 - type: euclidean_spearman value: 72.60346154803956 - type: main_score value: 72.60346154803956 - type: manhattan_pearson value: 74.85441649457519 - type: manhattan_spearman value: 71.82211844340206 - type: pearson value: 77.26960125718689 - type: spearman value: 72.60346154803956 task: type: STS - dataset: config: default name: MTEB CQADupstackAndroidRetrieval (default) revision: f46a197baaae43b4f621051089b82a364682dfeb split: test type: mteb/cqadupstack-android metrics: - type: main_score value: 52.916 - type: map_at_1 value: 32.649 - type: map_at_10 value: 45.772 - type: map_at_100 value: 47.457 - type: map_at_1000 value: 47.599999999999994 - type: map_at_20 value: 46.725 - type: map_at_3 value: 41.692 - type: map_at_5 value: 44.046 - type: mrr_at_1 value: 40.629470672389125 - type: mrr_at_10 value: 51.90226400526829 - type: mrr_at_100 value: 52.55196625361068 - type: mrr_at_1000 value: 52.59335586794691 - type: mrr_at_20 value: 52.2668351149164 - type: mrr_at_3 value: 49.40391034811636 - type: mrr_at_5 value: 51.09203624225081 - type: nauc_map_at_1000_diff1 value: 45.289329882047205 - type: nauc_map_at_1000_max value: 39.28004476774031 - type: nauc_map_at_1000_std value: -10.47186686937888 - type: nauc_map_at_100_diff1 value: 45.27568029871466 - type: nauc_map_at_100_max value: 39.30223620117373 - type: nauc_map_at_100_std value: -10.410329154110126 - type: nauc_map_at_10_diff1 value: 45.788018257879656 - type: nauc_map_at_10_max value: 38.87347267479215 - type: nauc_map_at_10_std value: -11.99683042659094 - type: nauc_map_at_1_diff1 value: 53.35615479144534 - type: nauc_map_at_1_max value: 31.475294365337536 - type: nauc_map_at_1_std value: -14.003054216071481 - type: nauc_map_at_20_diff1 value: 45.490662130375064 - type: nauc_map_at_20_max value: 39.090398903912536 - type: nauc_map_at_20_std value: -10.96175771068962 - type: nauc_map_at_3_diff1 value: 46.8301634760078 - type: nauc_map_at_3_max value: 36.2671266498893 - type: nauc_map_at_3_std value: -13.38271575807636 - type: nauc_map_at_5_diff1 value: 46.28657566818991 - type: nauc_map_at_5_max value: 37.8207988324964 - type: nauc_map_at_5_std value: -12.314684453880659 - type: nauc_mrr_at_1000_diff1 value: 43.14531221378786 - type: nauc_mrr_at_1000_max 
value: 41.03230454152977 - type: nauc_mrr_at_1000_std value: -7.879125647271273 - type: nauc_mrr_at_100_diff1 value: 43.12790875666598 - type: nauc_mrr_at_100_max value: 41.03454971653736 - type: nauc_mrr_at_100_std value: -7.858776508352652 - type: nauc_mrr_at_10_diff1 value: 42.87454261242089 - type: nauc_mrr_at_10_max value: 41.108557872693055 - type: nauc_mrr_at_10_std value: -8.099855590270796 - type: nauc_mrr_at_1_diff1 value: 49.61818728047409 - type: nauc_mrr_at_1_max value: 38.804257214142154 - type: nauc_mrr_at_1_std value: -10.72284382304455 - type: nauc_mrr_at_20_diff1 value: 43.0725399972107 - type: nauc_mrr_at_20_max value: 41.08996625932272 - type: nauc_mrr_at_20_std value: -7.8159114035841695 - type: nauc_mrr_at_3_diff1 value: 43.267966736078975 - type: nauc_mrr_at_3_max value: 40.36006635996485 - type: nauc_mrr_at_3_std value: -8.754877467052037 - type: nauc_mrr_at_5_diff1 value: 42.75118896375678 - type: nauc_mrr_at_5_max value: 40.91174373590108 - type: nauc_mrr_at_5_std value: -8.082572960635977 - type: nauc_ndcg_at_1000_diff1 value: 42.8206024836842 - type: nauc_ndcg_at_1000_max value: 41.23107259743807 - type: nauc_ndcg_at_1000_std value: -7.267656950359476 - type: nauc_ndcg_at_100_diff1 value: 42.28641440933444 - type: nauc_ndcg_at_100_max value: 41.6734450569554 - type: nauc_ndcg_at_100_std value: -6.224022095206258 - type: nauc_ndcg_at_10_diff1 value: 42.753045687362324 - type: nauc_ndcg_at_10_max value: 41.47728394469051 - type: nauc_ndcg_at_10_std value: -9.82176692905538 - type: nauc_ndcg_at_1_diff1 value: 49.61818728047409 - type: nauc_ndcg_at_1_max value: 38.804257214142154 - type: nauc_ndcg_at_1_std value: -10.72284382304455 - type: nauc_ndcg_at_20_diff1 value: 42.79059001163042 - type: nauc_ndcg_at_20_max value: 41.45466723327685 - type: nauc_ndcg_at_20_std value: -7.8811099324857095 - type: nauc_ndcg_at_3_diff1 value: 42.777535675427956 - type: nauc_ndcg_at_3_max value: 39.005245346467646 - type: nauc_ndcg_at_3_std value: -9.754879407450163 - type: nauc_ndcg_at_5_diff1 value: 42.66583257245142 - type: nauc_ndcg_at_5_max value: 40.326265568150504 - type: nauc_ndcg_at_5_std value: -9.230270533786904 - type: nauc_precision_at_1000_diff1 value: -16.70348014597805 - type: nauc_precision_at_1000_max value: -8.386803902715592 - type: nauc_precision_at_1000_std value: 0.13502316171005296 - type: nauc_precision_at_100_diff1 value: -15.998085065118264 - type: nauc_precision_at_100_max value: 8.956110379153944 - type: nauc_precision_at_100_std value: 16.51962286328538 - type: nauc_precision_at_10_diff1 value: -1.001533004655409 - type: nauc_precision_at_10_max value: 29.82358647130004 - type: nauc_precision_at_10_std value: 10.818324954671196 - type: nauc_precision_at_1_diff1 value: 49.61818728047409 - type: nauc_precision_at_1_max value: 38.804257214142154 - type: nauc_precision_at_1_std value: -10.72284382304455 - type: nauc_precision_at_20_diff1 value: -7.810474487909365 - type: nauc_precision_at_20_max value: 21.77756355634436 - type: nauc_precision_at_20_std value: 16.63426939452981 - type: nauc_precision_at_3_diff1 value: 18.512689708793893 - type: nauc_precision_at_3_max value: 39.095246356190245 - type: nauc_precision_at_3_std value: -1.0810033734203999 - type: nauc_precision_at_5_diff1 value: 8.138752687073158 - type: nauc_precision_at_5_max value: 36.10702475124429 - type: nauc_precision_at_5_std value: 7.333980251486291 - type: nauc_recall_at_1000_diff1 value: 24.267062537529018 - type: nauc_recall_at_1000_max value: 54.99733856577079 - type: 
nauc_recall_at_1000_std value: 35.868095521705776 - type: nauc_recall_at_100_diff1 value: 26.74575223566034 - type: nauc_recall_at_100_max value: 47.652482792272785 - type: nauc_recall_at_100_std value: 19.054850321156742 - type: nauc_recall_at_10_diff1 value: 34.19555084006223 - type: nauc_recall_at_10_max value: 41.8550922310514 - type: nauc_recall_at_10_std value: -8.506390007838977 - type: nauc_recall_at_1_diff1 value: 53.35615479144534 - type: nauc_recall_at_1_max value: 31.475294365337536 - type: nauc_recall_at_1_std value: -14.003054216071481 - type: nauc_recall_at_20_diff1 value: 33.77586137392995 - type: nauc_recall_at_20_max value: 42.954168251101486 - type: nauc_recall_at_20_std value: 0.3955721013883589 - type: nauc_recall_at_3_diff1 value: 38.445298323492345 - type: nauc_recall_at_3_max value: 35.55313976386901 - type: nauc_recall_at_3_std value: -11.509187665960084 - type: nauc_recall_at_5_diff1 value: 35.789287343837884 - type: nauc_recall_at_5_max value: 38.63482405526856 - type: nauc_recall_at_5_std value: -8.350167399589925 - type: ndcg_at_1 value: 40.629 - type: ndcg_at_10 value: 52.916 - type: ndcg_at_100 value: 58.07600000000001 - type: ndcg_at_1000 value: 59.73500000000001 - type: ndcg_at_20 value: 54.974000000000004 - type: ndcg_at_3 value: 47.547 - type: ndcg_at_5 value: 50.295 - type: precision_at_1 value: 40.629 - type: precision_at_10 value: 10.700999999999999 - type: precision_at_100 value: 1.6820000000000002 - type: precision_at_1000 value: 0.22300000000000003 - type: precision_at_20 value: 6.345000000000001 - type: precision_at_3 value: 23.796 - type: precision_at_5 value: 17.596999999999998 - type: recall_at_1 value: 32.649 - type: recall_at_10 value: 66.116 - type: recall_at_100 value: 87.51 - type: recall_at_1000 value: 97.829 - type: recall_at_20 value: 73.379 - type: recall_at_3 value: 50.613 - type: recall_at_5 value: 58.01 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackEnglishRetrieval (default) revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 split: test type: mteb/cqadupstack-english metrics: - type: main_score value: 43.505 - type: map_at_1 value: 27.195000000000004 - type: map_at_10 value: 37.580000000000005 - type: map_at_100 value: 39.01 - type: map_at_1000 value: 39.159 - type: map_at_20 value: 38.330999999999996 - type: map_at_3 value: 34.412 - type: map_at_5 value: 36.193 - type: mrr_at_1 value: 35.28662420382165 - type: mrr_at_10 value: 43.60006571630778 - type: mrr_at_100 value: 44.353842651523586 - type: mrr_at_1000 value: 44.4053387476111 - type: mrr_at_20 value: 44.04638739149498 - type: mrr_at_3 value: 41.18895966029725 - type: mrr_at_5 value: 42.61252653927815 - type: nauc_map_at_1000_diff1 value: 42.38618266865594 - type: nauc_map_at_1000_max value: 41.491554181716175 - type: nauc_map_at_1000_std value: -3.5386577633054737 - type: nauc_map_at_100_diff1 value: 42.40564292548819 - type: nauc_map_at_100_max value: 41.45373569198577 - type: nauc_map_at_100_std value: -3.6625477908993473 - type: nauc_map_at_10_diff1 value: 42.625665623454125 - type: nauc_map_at_10_max value: 40.51903544452516 - type: nauc_map_at_10_std value: -5.536441154309886 - type: nauc_map_at_1_diff1 value: 47.914129207001 - type: nauc_map_at_1_max value: 36.47093132755044 - type: nauc_map_at_1_std value: -9.621259227944329 - type: nauc_map_at_20_diff1 value: 42.57383348544318 - type: nauc_map_at_20_max value: 41.02679079990154 - type: nauc_map_at_20_std value: -4.490511334672925 - type: nauc_map_at_3_diff1 value: 43.59030079409757 - type: 
nauc_map_at_3_max value: 39.93229570655855 - type: nauc_map_at_3_std value: -7.175841169162778 - type: nauc_map_at_5_diff1 value: 42.98743128411056 - type: nauc_map_at_5_max value: 40.150925311516275 - type: nauc_map_at_5_std value: -6.424502709519393 - type: nauc_mrr_at_1000_diff1 value: 42.41841677865114 - type: nauc_mrr_at_1000_max value: 42.247852660053745 - type: nauc_mrr_at_1000_std value: -1.165540535556555 - type: nauc_mrr_at_100_diff1 value: 42.420657267186726 - type: nauc_mrr_at_100_max value: 42.251825675553704 - type: nauc_mrr_at_100_std value: -1.1636278747245774 - type: nauc_mrr_at_10_diff1 value: 42.3138037346923 - type: nauc_mrr_at_10_max value: 42.10074065067146 - type: nauc_mrr_at_10_std value: -1.6076100571015888 - type: nauc_mrr_at_1_diff1 value: 46.14573077561728 - type: nauc_mrr_at_1_max value: 42.061927948085334 - type: nauc_mrr_at_1_std value: -3.7673030766056828 - type: nauc_mrr_at_20_diff1 value: 42.40273873695689 - type: nauc_mrr_at_20_max value: 42.171375510351766 - type: nauc_mrr_at_20_std value: -1.3515543593263308 - type: nauc_mrr_at_3_diff1 value: 42.65327763586051 - type: nauc_mrr_at_3_max value: 42.60487343560702 - type: nauc_mrr_at_3_std value: -2.017909554093815 - type: nauc_mrr_at_5_diff1 value: 42.55441855170127 - type: nauc_mrr_at_5_max value: 42.24074898539688 - type: nauc_mrr_at_5_std value: -1.6335691035307471 - type: nauc_ndcg_at_1000_diff1 value: 40.99947003301228 - type: nauc_ndcg_at_1000_max value: 42.59731002851968 - type: nauc_ndcg_at_1000_std value: 1.5617506389566693 - type: nauc_ndcg_at_100_diff1 value: 41.05947202800858 - type: nauc_ndcg_at_100_max value: 42.655256081496375 - type: nauc_ndcg_at_100_std value: 1.1275622124800324 - type: nauc_ndcg_at_10_diff1 value: 40.71646296399764 - type: nauc_ndcg_at_10_max value: 41.32474748899915 - type: nauc_ndcg_at_10_std value: -2.838548394405895 - type: nauc_ndcg_at_1_diff1 value: 46.14573077561728 - type: nauc_ndcg_at_1_max value: 42.061927948085334 - type: nauc_ndcg_at_1_std value: -3.7673030766056828 - type: nauc_ndcg_at_20_diff1 value: 40.94701485601509 - type: nauc_ndcg_at_20_max value: 41.89909312421838 - type: nauc_ndcg_at_20_std value: -1.0729170787288922 - type: nauc_ndcg_at_3_diff1 value: 41.57176168658056 - type: nauc_ndcg_at_3_max value: 42.089267442299075 - type: nauc_ndcg_at_3_std value: -3.6656009457600476 - type: nauc_ndcg_at_5_diff1 value: 41.312525235264545 - type: nauc_ndcg_at_5_max value: 41.40459679814617 - type: nauc_ndcg_at_5_std value: -3.607343043079315 - type: nauc_precision_at_1000_diff1 value: -10.389355556009154 - type: nauc_precision_at_1000_max value: 11.213997730937681 - type: nauc_precision_at_1000_std value: 30.484993965189755 - type: nauc_precision_at_100_diff1 value: -4.589336722169161 - type: nauc_precision_at_100_max value: 23.61692037737193 - type: nauc_precision_at_100_std value: 34.58390587538388 - type: nauc_precision_at_10_diff1 value: 11.420232344757583 - type: nauc_precision_at_10_max value: 33.61211581898657 - type: nauc_precision_at_10_std value: 17.67212437703975 - type: nauc_precision_at_1_diff1 value: 46.14573077561728 - type: nauc_precision_at_1_max value: 42.061927948085334 - type: nauc_precision_at_1_std value: -3.7673030766056828 - type: nauc_precision_at_20_diff1 value: 5.338962369182836 - type: nauc_precision_at_20_max value: 31.49712758851038 - type: nauc_precision_at_20_std value: 26.273239812959265 - type: nauc_precision_at_3_diff1 value: 25.760340841656195 - type: nauc_precision_at_3_max value: 40.78701062437991 - type: 
nauc_precision_at_3_std value: 6.786760881569201 - type: nauc_precision_at_5_diff1 value: 20.210043555954318 - type: nauc_precision_at_5_max value: 37.031291554404085 - type: nauc_precision_at_5_std value: 10.611181228801739 - type: nauc_recall_at_1000_diff1 value: 33.476332225623814 - type: nauc_recall_at_1000_max value: 47.867568065614016 - type: nauc_recall_at_1000_std value: 43.50634640789991 - type: nauc_recall_at_100_diff1 value: 35.07854220105017 - type: nauc_recall_at_100_max value: 42.9081089829942 - type: nauc_recall_at_100_std value: 19.93173296454809 - type: nauc_recall_at_10_diff1 value: 35.186657922090845 - type: nauc_recall_at_10_max value: 36.89789950808192 - type: nauc_recall_at_10_std value: -3.0377254637259083 - type: nauc_recall_at_1_diff1 value: 47.914129207001 - type: nauc_recall_at_1_max value: 36.47093132755044 - type: nauc_recall_at_1_std value: -9.621259227944329 - type: nauc_recall_at_20_diff1 value: 35.33528482662295 - type: nauc_recall_at_20_max value: 38.328398726744055 - type: nauc_recall_at_20_std value: 3.9623726501092533 - type: nauc_recall_at_3_diff1 value: 39.2886333384052 - type: nauc_recall_at_3_max value: 38.57303028073727 - type: nauc_recall_at_3_std value: -6.903543957557018 - type: nauc_recall_at_5_diff1 value: 37.06028417057189 - type: nauc_recall_at_5_max value: 36.99251102238125 - type: nauc_recall_at_5_std value: -5.207245708092033 - type: ndcg_at_1 value: 35.287 - type: ndcg_at_10 value: 43.505 - type: ndcg_at_100 value: 48.502 - type: ndcg_at_1000 value: 50.782000000000004 - type: ndcg_at_20 value: 45.37 - type: ndcg_at_3 value: 39.074 - type: ndcg_at_5 value: 41.123 - type: precision_at_1 value: 35.287 - type: precision_at_10 value: 8.643 - type: precision_at_100 value: 1.4829999999999999 - type: precision_at_1000 value: 0.202 - type: precision_at_20 value: 5.188000000000001 - type: precision_at_3 value: 19.49 - type: precision_at_5 value: 13.975000000000001 - type: recall_at_1 value: 27.195000000000004 - type: recall_at_10 value: 54.089 - type: recall_at_100 value: 75.381 - type: recall_at_1000 value: 89.83 - type: recall_at_20 value: 60.99 - type: recall_at_3 value: 40.556 - type: recall_at_5 value: 46.573 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGamingRetrieval (default) revision: 4885aa143210c98657558c04aaf3dc47cfb54340 split: test type: mteb/cqadupstack-gaming metrics: - type: main_score value: 52.82599999999999 - type: map_at_1 value: 35.96 - type: map_at_10 value: 47.109 - type: map_at_100 value: 48.227 - type: map_at_1000 value: 48.294 - type: map_at_20 value: 47.742000000000004 - type: map_at_3 value: 43.856 - type: map_at_5 value: 45.73 - type: mrr_at_1 value: 41.2539184952978 - type: mrr_at_10 value: 50.4839279494452 - type: mrr_at_100 value: 51.18997195908982 - type: mrr_at_1000 value: 51.22588707203708 - type: mrr_at_20 value: 50.87826541247888 - type: mrr_at_3 value: 47.732497387669824 - type: mrr_at_5 value: 49.31556948798333 - type: nauc_map_at_1000_diff1 value: 50.21912063624032 - type: nauc_map_at_1000_max value: 41.05492601368038 - type: nauc_map_at_1000_std value: -1.8335758997830354 - type: nauc_map_at_100_diff1 value: 50.191466738388776 - type: nauc_map_at_100_max value: 41.04781068836281 - type: nauc_map_at_100_std value: -1.8487668121623901 - type: nauc_map_at_10_diff1 value: 50.299475818245554 - type: nauc_map_at_10_max value: 40.57210666375838 - type: nauc_map_at_10_std value: -2.5349867924738354 - type: nauc_map_at_1_diff1 value: 54.309305748182524 - type: nauc_map_at_1_max value: 
34.78537970357836 - type: nauc_map_at_1_std value: -4.367654821096338 - type: nauc_map_at_20_diff1 value: 50.17956579459495 - type: nauc_map_at_20_max value: 40.845935693363586 - type: nauc_map_at_20_std value: -2.189911133302338 - type: nauc_map_at_3_diff1 value: 50.863917087098066 - type: nauc_map_at_3_max value: 39.469485934751866 - type: nauc_map_at_3_std value: -4.334326050046052 - type: nauc_map_at_5_diff1 value: 50.53806135770955 - type: nauc_map_at_5_max value: 39.751056790635424 - type: nauc_map_at_5_std value: -3.486238828551465 - type: nauc_mrr_at_1000_diff1 value: 50.3689275270778 - type: nauc_mrr_at_1000_max value: 42.72789427349376 - type: nauc_mrr_at_1000_std value: -0.6875106248393903 - type: nauc_mrr_at_100_diff1 value: 50.35361102734404 - type: nauc_mrr_at_100_max value: 42.743613265352224 - type: nauc_mrr_at_100_std value: -0.6686536663032981 - type: nauc_mrr_at_10_diff1 value: 50.25968474187666 - type: nauc_mrr_at_10_max value: 42.746468116591906 - type: nauc_mrr_at_10_std value: -0.7353469482521312 - type: nauc_mrr_at_1_diff1 value: 54.681271473002916 - type: nauc_mrr_at_1_max value: 40.441585770844284 - type: nauc_mrr_at_1_std value: -2.839590354418767 - type: nauc_mrr_at_20_diff1 value: 50.30458869022417 - type: nauc_mrr_at_20_max value: 42.69148052936814 - type: nauc_mrr_at_20_std value: -0.7260575052437486 - type: nauc_mrr_at_3_diff1 value: 50.69068675878361 - type: nauc_mrr_at_3_max value: 42.54723124779581 - type: nauc_mrr_at_3_std value: -1.9548419929611167 - type: nauc_mrr_at_5_diff1 value: 50.37284804647469 - type: nauc_mrr_at_5_max value: 42.48933463080673 - type: nauc_mrr_at_5_std value: -1.1959350211193 - type: nauc_ndcg_at_1000_diff1 value: 49.13669459478487 - type: nauc_ndcg_at_1000_max value: 43.09193372090789 - type: nauc_ndcg_at_1000_std value: 1.0255400585004846 - type: nauc_ndcg_at_100_diff1 value: 48.46036764721693 - type: nauc_ndcg_at_100_max value: 43.337874144896745 - type: nauc_ndcg_at_100_std value: 1.4268868889619024 - type: nauc_ndcg_at_10_diff1 value: 48.5501585301524 - type: nauc_ndcg_at_10_max value: 42.38370635551507 - type: nauc_ndcg_at_10_std value: -0.126958393912763 - type: nauc_ndcg_at_1_diff1 value: 54.681271473002916 - type: nauc_ndcg_at_1_max value: 40.441585770844284 - type: nauc_ndcg_at_1_std value: -2.839590354418767 - type: nauc_ndcg_at_20_diff1 value: 48.37089809404846 - type: nauc_ndcg_at_20_max value: 42.53664952827513 - type: nauc_ndcg_at_20_std value: 0.051941093126791994 - type: nauc_ndcg_at_3_diff1 value: 49.486133964537785 - type: nauc_ndcg_at_3_max value: 41.262493607776804 - type: nauc_ndcg_at_3_std value: -2.7967155168398428 - type: nauc_ndcg_at_5_diff1 value: 48.96714073924463 - type: nauc_ndcg_at_5_max value: 41.323528047385636 - type: nauc_ndcg_at_5_std value: -1.5158330808056293 - type: nauc_precision_at_1000_diff1 value: -7.026558402946765 - type: nauc_precision_at_1000_max value: 19.486730125805913 - type: nauc_precision_at_1000_std value: 27.926974867437256 - type: nauc_precision_at_100_diff1 value: -2.2036370386128104 - type: nauc_precision_at_100_max value: 28.340445317172758 - type: nauc_precision_at_100_std value: 28.516212546705543 - type: nauc_precision_at_10_diff1 value: 17.901652875127454 - type: nauc_precision_at_10_max value: 39.46667014199858 - type: nauc_precision_at_10_std value: 14.874676790136363 - type: nauc_precision_at_1_diff1 value: 54.681271473002916 - type: nauc_precision_at_1_max value: 40.441585770844284 - type: nauc_precision_at_1_std value: -2.839590354418767 - type: 
nauc_precision_at_20_diff1 value: 9.992442152246879 - type: nauc_precision_at_20_max value: 35.87159722623395 - type: nauc_precision_at_20_std value: 19.593433922664403 - type: nauc_precision_at_3_diff1 value: 33.43340071813058 - type: nauc_precision_at_3_max value: 42.813495259558984 - type: nauc_precision_at_3_std value: 2.319939520883305 - type: nauc_precision_at_5_diff1 value: 26.73151055105659 - type: nauc_precision_at_5_max value: 40.42707721448163 - type: nauc_precision_at_5_std value: 7.084075741117675 - type: nauc_recall_at_1000_diff1 value: 40.976362341621 - type: nauc_recall_at_1000_max value: 65.29728663348455 - type: nauc_recall_at_1000_std value: 49.444127154114526 - type: nauc_recall_at_100_diff1 value: 33.852534329005536 - type: nauc_recall_at_100_max value: 53.02330599056479 - type: nauc_recall_at_100_std value: 23.794773788370286 - type: nauc_recall_at_10_diff1 value: 40.67129797892841 - type: nauc_recall_at_10_max value: 42.66444189741527 - type: nauc_recall_at_10_std value: 4.429365961370951 - type: nauc_recall_at_1_diff1 value: 54.309305748182524 - type: nauc_recall_at_1_max value: 34.78537970357836 - type: nauc_recall_at_1_std value: -4.367654821096338 - type: nauc_recall_at_20_diff1 value: 38.71080752436736 - type: nauc_recall_at_20_max value: 43.5624279616423 - type: nauc_recall_at_20_std value: 6.624168124956635 - type: nauc_recall_at_3_diff1 value: 45.11133844611515 - type: nauc_recall_at_3_max value: 39.73140743866134 - type: nauc_recall_at_3_std value: -4.333260589935666 - type: nauc_recall_at_5_diff1 value: 43.2408330778742 - type: nauc_recall_at_5_max value: 39.765735398976986 - type: nauc_recall_at_5_std value: -0.5671079053603477 - type: ndcg_at_1 value: 41.254000000000005 - type: ndcg_at_10 value: 52.82599999999999 - type: ndcg_at_100 value: 57.333 - type: ndcg_at_1000 value: 58.714 - type: ndcg_at_20 value: 54.559000000000005 - type: ndcg_at_3 value: 47.064 - type: ndcg_at_5 value: 49.91 - type: precision_at_1 value: 41.254000000000005 - type: precision_at_10 value: 8.577 - type: precision_at_100 value: 1.1900000000000002 - type: precision_at_1000 value: 0.135 - type: precision_at_20 value: 4.84 - type: precision_at_3 value: 20.899 - type: precision_at_5 value: 14.571000000000002 - type: recall_at_1 value: 35.96 - type: recall_at_10 value: 66.52799999999999 - type: recall_at_100 value: 86.284 - type: recall_at_1000 value: 96.279 - type: recall_at_20 value: 72.914 - type: recall_at_3 value: 51.03 - type: recall_at_5 value: 57.959 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGisRetrieval (default) revision: 5003b3064772da1887988e05400cf3806fe491f2 split: test type: mteb/cqadupstack-gis metrics: - type: main_score value: 41.058 - type: map_at_1 value: 26.669999999999998 - type: map_at_10 value: 35.673 - type: map_at_100 value: 36.869 - type: map_at_1000 value: 36.954 - type: map_at_20 value: 36.355 - type: map_at_3 value: 32.696999999999996 - type: map_at_5 value: 34.316 - type: mrr_at_1 value: 28.70056497175141 - type: mrr_at_10 value: 37.57080082503811 - type: mrr_at_100 value: 38.57318940120763 - type: mrr_at_1000 value: 38.628923670653904 - type: mrr_at_20 value: 38.142966912998894 - type: mrr_at_3 value: 34.7080979284369 - type: mrr_at_5 value: 36.27306967984933 - type: nauc_map_at_1000_diff1 value: 31.95025505717376 - type: nauc_map_at_1000_max value: 27.445344687308342 - type: nauc_map_at_1000_std value: -3.766659179631952 - type: nauc_map_at_100_diff1 value: 31.904113858753202 - type: nauc_map_at_100_max value: 
27.43329141824181 - type: nauc_map_at_100_std value: -3.751434380621304 - type: nauc_map_at_10_diff1 value: 32.26387206275712 - type: nauc_map_at_10_max value: 27.27250131155855 - type: nauc_map_at_10_std value: -4.2570700844827805 - type: nauc_map_at_1_diff1 value: 37.676610770755644 - type: nauc_map_at_1_max value: 24.743059176207872 - type: nauc_map_at_1_std value: -7.899143095387766 - type: nauc_map_at_20_diff1 value: 31.870529576581475 - type: nauc_map_at_20_max value: 27.305874069658703 - type: nauc_map_at_20_std value: -3.879867258167858 - type: nauc_map_at_3_diff1 value: 33.69937975192626 - type: nauc_map_at_3_max value: 27.2785915528158 - type: nauc_map_at_3_std value: -4.550393882628957 - type: nauc_map_at_5_diff1 value: 32.50268016494544 - type: nauc_map_at_5_max value: 26.628666672017225 - type: nauc_map_at_5_std value: -4.888374245634325 - type: nauc_mrr_at_1000_diff1 value: 30.688875736244718 - type: nauc_mrr_at_1000_max value: 28.36168526315933 - type: nauc_mrr_at_1000_std value: -2.4134356134739363 - type: nauc_mrr_at_100_diff1 value: 30.643548147379505 - type: nauc_mrr_at_100_max value: 28.360927430391857 - type: nauc_mrr_at_100_std value: -2.388432251569784 - type: nauc_mrr_at_10_diff1 value: 30.838160632926026 - type: nauc_mrr_at_10_max value: 28.274232987739524 - type: nauc_mrr_at_10_std value: -2.6455491371420234 - type: nauc_mrr_at_1_diff1 value: 36.333747251739936 - type: nauc_mrr_at_1_max value: 27.09211690724867 - type: nauc_mrr_at_1_std value: -6.872327181645408 - type: nauc_mrr_at_20_diff1 value: 30.566159689071643 - type: nauc_mrr_at_20_max value: 28.254100153054484 - type: nauc_mrr_at_20_std value: -2.3863086501910877 - type: nauc_mrr_at_3_diff1 value: 31.995970169795008 - type: nauc_mrr_at_3_max value: 28.672649281172863 - type: nauc_mrr_at_3_std value: -3.0253479479372682 - type: nauc_mrr_at_5_diff1 value: 30.925479033010074 - type: nauc_mrr_at_5_max value: 27.894579265110913 - type: nauc_mrr_at_5_std value: -3.1633756284644305 - type: nauc_ndcg_at_1000_diff1 value: 29.480108448835164 - type: nauc_ndcg_at_1000_max value: 28.694910139113766 - type: nauc_ndcg_at_1000_std value: -0.9685609700216138 - type: nauc_ndcg_at_100_diff1 value: 28.20394217817361 - type: nauc_ndcg_at_100_max value: 28.718549400317933 - type: nauc_ndcg_at_100_std value: -0.2052052223285665 - type: nauc_ndcg_at_10_diff1 value: 29.33527460830841 - type: nauc_ndcg_at_10_max value: 28.100629016562795 - type: nauc_ndcg_at_10_std value: -1.9043904359384647 - type: nauc_ndcg_at_1_diff1 value: 36.333747251739936 - type: nauc_ndcg_at_1_max value: 27.09211690724867 - type: nauc_ndcg_at_1_std value: -6.872327181645408 - type: nauc_ndcg_at_20_diff1 value: 28.12694047381911 - type: nauc_ndcg_at_20_max value: 28.07256049681584 - type: nauc_ndcg_at_20_std value: -0.7546400633868358 - type: nauc_ndcg_at_3_diff1 value: 31.87777938588317 - type: nauc_ndcg_at_3_max value: 28.084297522561176 - type: nauc_ndcg_at_3_std value: -3.092215463329312 - type: nauc_ndcg_at_5_diff1 value: 29.881507389621103 - type: nauc_ndcg_at_5_max value: 26.823659437194475 - type: nauc_ndcg_at_5_std value: -3.351691772718416 - type: nauc_precision_at_1000_diff1 value: -6.384965239026326 - type: nauc_precision_at_1000_max value: 10.400043080009187 - type: nauc_precision_at_1000_std value: 13.493069987475284 - type: nauc_precision_at_100_diff1 value: -3.8311477783636785 - type: nauc_precision_at_100_max value: 21.313719573692566 - type: nauc_precision_at_100_std value: 15.340019805905872 - type: nauc_precision_at_10_diff1 value: 
14.879866186868682 - type: nauc_precision_at_10_max value: 29.443484927548557 - type: nauc_precision_at_10_std value: 5.190205795872693 - type: nauc_precision_at_1_diff1 value: 36.333747251739936 - type: nauc_precision_at_1_max value: 27.09211690724867 - type: nauc_precision_at_1_std value: -6.872327181645408 - type: nauc_precision_at_20_diff1 value: 6.776608893898066 - type: nauc_precision_at_20_max value: 25.915514134442724 - type: nauc_precision_at_20_std value: 10.25138083695759 - type: nauc_precision_at_3_diff1 value: 24.58655147167322 - type: nauc_precision_at_3_max value: 32.0175630253561 - type: nauc_precision_at_3_std value: 1.4274592250651807 - type: nauc_precision_at_5_diff1 value: 18.590483368382866 - type: nauc_precision_at_5_max value: 28.253561736970234 - type: nauc_precision_at_5_std value: 0.829570400364922 - type: nauc_recall_at_1000_diff1 value: 10.810130884218827 - type: nauc_recall_at_1000_max value: 47.13075325263327 - type: nauc_recall_at_1000_std value: 33.15494499163207 - type: nauc_recall_at_100_diff1 value: 8.125366968609814 - type: nauc_recall_at_100_max value: 33.38380343426024 - type: nauc_recall_at_100_std value: 18.279628794274075 - type: nauc_recall_at_10_diff1 value: 20.649934311742626 - type: nauc_recall_at_10_max value: 28.749953838066926 - type: nauc_recall_at_10_std value: 4.047023543340581 - type: nauc_recall_at_1_diff1 value: 37.676610770755644 - type: nauc_recall_at_1_max value: 24.743059176207872 - type: nauc_recall_at_1_std value: -7.899143095387766 - type: nauc_recall_at_20_diff1 value: 14.783244276844618 - type: nauc_recall_at_20_max value: 28.373124736783172 - type: nauc_recall_at_20_std value: 9.128634320360753 - type: nauc_recall_at_3_diff1 value: 28.176705702681026 - type: nauc_recall_at_3_max value: 28.30143052234742 - type: nauc_recall_at_3_std value: -0.04083763472538744 - type: nauc_recall_at_5_diff1 value: 23.192105206068955 - type: nauc_recall_at_5_max value: 25.258497503610215 - type: nauc_recall_at_5_std value: -0.5987707205459003 - type: ndcg_at_1 value: 28.701 - type: ndcg_at_10 value: 41.058 - type: ndcg_at_100 value: 46.632 - type: ndcg_at_1000 value: 48.662 - type: ndcg_at_20 value: 43.363 - type: ndcg_at_3 value: 35.132999999999996 - type: ndcg_at_5 value: 37.881 - type: precision_at_1 value: 28.701 - type: precision_at_10 value: 6.3950000000000005 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_20 value: 3.7510000000000003 - type: precision_at_3 value: 14.84 - type: precision_at_5 value: 10.508000000000001 - type: recall_at_1 value: 26.669999999999998 - type: recall_at_10 value: 55.92 - type: recall_at_100 value: 80.867 - type: recall_at_1000 value: 95.906 - type: recall_at_20 value: 64.586 - type: recall_at_3 value: 39.92 - type: recall_at_5 value: 46.396 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackMathematicaRetrieval (default) revision: 90fceea13679c63fe563ded68f3b6f06e50061de split: test type: mteb/cqadupstack-mathematica metrics: - type: main_score value: 31.507 - type: map_at_1 value: 16.398 - type: map_at_10 value: 25.485000000000003 - type: map_at_100 value: 26.967999999999996 - type: map_at_1000 value: 27.084999999999997 - type: map_at_20 value: 26.35 - type: map_at_3 value: 22.332 - type: map_at_5 value: 23.958 - type: mrr_at_1 value: 20.398009950248756 - type: mrr_at_10 value: 29.76614941167181 - type: mrr_at_100 value: 30.86747432087463 - type: mrr_at_1000 value: 30.919927410511455 - type: mrr_at_20 value: 
30.443181439265278 - type: mrr_at_3 value: 26.637645107794373 - type: mrr_at_5 value: 28.31674958540631 - type: nauc_map_at_1000_diff1 value: 31.6249291191536 - type: nauc_map_at_1000_max value: 29.55962360137207 - type: nauc_map_at_1000_std value: 5.215881981383439 - type: nauc_map_at_100_diff1 value: 31.597179077805414 - type: nauc_map_at_100_max value: 29.53626392781722 - type: nauc_map_at_100_std value: 5.205272245991525 - type: nauc_map_at_10_diff1 value: 31.60302262185177 - type: nauc_map_at_10_max value: 28.72958244724668 - type: nauc_map_at_10_std value: 4.591243705917117 - type: nauc_map_at_1_diff1 value: 37.68937044932118 - type: nauc_map_at_1_max value: 27.785294663519327 - type: nauc_map_at_1_std value: 2.281125893959806 - type: nauc_map_at_20_diff1 value: 31.569405397103345 - type: nauc_map_at_20_max value: 29.5374594662604 - type: nauc_map_at_20_std value: 5.062837955779829 - type: nauc_map_at_3_diff1 value: 32.846629596852864 - type: nauc_map_at_3_max value: 28.935149877956366 - type: nauc_map_at_3_std value: 4.043651949362703 - type: nauc_map_at_5_diff1 value: 32.20424309358551 - type: nauc_map_at_5_max value: 28.84626720504408 - type: nauc_map_at_5_std value: 4.480982141190721 - type: nauc_mrr_at_1000_diff1 value: 29.91514359849449 - type: nauc_mrr_at_1000_max value: 29.4880837184256 - type: nauc_mrr_at_1000_std value: 6.90169972042484 - type: nauc_mrr_at_100_diff1 value: 29.90225503162752 - type: nauc_mrr_at_100_max value: 29.468862402041644 - type: nauc_mrr_at_100_std value: 6.891889857381346 - type: nauc_mrr_at_10_diff1 value: 29.935063368574966 - type: nauc_mrr_at_10_max value: 29.144965203735662 - type: nauc_mrr_at_10_std value: 6.69773269545311 - type: nauc_mrr_at_1_diff1 value: 34.64233696145788 - type: nauc_mrr_at_1_max value: 29.76861665629048 - type: nauc_mrr_at_1_std value: 6.050508042950772 - type: nauc_mrr_at_20_diff1 value: 29.869194769562036 - type: nauc_mrr_at_20_max value: 29.52599000601207 - type: nauc_mrr_at_20_std value: 6.833276125615728 - type: nauc_mrr_at_3_diff1 value: 30.73272542293855 - type: nauc_mrr_at_3_max value: 29.646125101813958 - type: nauc_mrr_at_3_std value: 6.596409692221532 - type: nauc_mrr_at_5_diff1 value: 30.22116393198506 - type: nauc_mrr_at_5_max value: 29.3005889974291 - type: nauc_mrr_at_5_std value: 6.761086751620502 - type: nauc_ndcg_at_1000_diff1 value: 29.53517185395757 - type: nauc_ndcg_at_1000_max value: 30.156269324153744 - type: nauc_ndcg_at_1000_std value: 7.756076732606718 - type: nauc_ndcg_at_100_diff1 value: 28.978423923406538 - type: nauc_ndcg_at_100_max value: 29.7055702302561 - type: nauc_ndcg_at_100_std value: 7.725429773260483 - type: nauc_ndcg_at_10_diff1 value: 28.95114011689634 - type: nauc_ndcg_at_10_max value: 28.050859118724443 - type: nauc_ndcg_at_10_std value: 5.564584606153562 - type: nauc_ndcg_at_1_diff1 value: 34.64233696145788 - type: nauc_ndcg_at_1_max value: 29.76861665629048 - type: nauc_ndcg_at_1_std value: 6.050508042950772 - type: nauc_ndcg_at_20_diff1 value: 28.792293298047504 - type: nauc_ndcg_at_20_max value: 30.26649029003995 - type: nauc_ndcg_at_20_std value: 6.692147001644501 - type: nauc_ndcg_at_3_diff1 value: 31.22691508724979 - type: nauc_ndcg_at_3_max value: 29.41685209008128 - type: nauc_ndcg_at_3_std value: 5.287699533527526 - type: nauc_ndcg_at_5_diff1 value: 30.126889228701526 - type: nauc_ndcg_at_5_max value: 28.811536881225603 - type: nauc_ndcg_at_5_std value: 5.567866298638633 - type: nauc_precision_at_1000_diff1 value: 1.0315570861883616 - type: nauc_precision_at_1000_max 
value: 5.444321907094073 - type: nauc_precision_at_1000_std value: 3.0310745219226525 - type: nauc_precision_at_100_diff1 value: 7.18318986657559 - type: nauc_precision_at_100_max value: 17.459722160298842 - type: nauc_precision_at_100_std value: 10.082153389290994 - type: nauc_precision_at_10_diff1 value: 17.142578413214434 - type: nauc_precision_at_10_max value: 26.846895769037225 - type: nauc_precision_at_10_std value: 8.568196201489595 - type: nauc_precision_at_1_diff1 value: 34.64233696145788 - type: nauc_precision_at_1_max value: 29.76861665629048 - type: nauc_precision_at_1_std value: 6.050508042950772 - type: nauc_precision_at_20_diff1 value: 13.674761586839344 - type: nauc_precision_at_20_max value: 29.83399743832858 - type: nauc_precision_at_20_std value: 11.562042971033899 - type: nauc_precision_at_3_diff1 value: 25.01590537073653 - type: nauc_precision_at_3_max value: 30.253033767323938 - type: nauc_precision_at_3_std value: 7.3087944205161515 - type: nauc_precision_at_5_diff1 value: 20.975487011820988 - type: nauc_precision_at_5_max value: 29.173537748534212 - type: nauc_precision_at_5_std value: 8.945752465905947 - type: nauc_recall_at_1000_diff1 value: 18.53296507398216 - type: nauc_recall_at_1000_max value: 34.927013467781165 - type: nauc_recall_at_1000_std value: 34.7934249117797 - type: nauc_recall_at_100_diff1 value: 18.01406190276854 - type: nauc_recall_at_100_max value: 25.35501374220775 - type: nauc_recall_at_100_std value: 18.19308339219603 - type: nauc_recall_at_10_diff1 value: 21.406147428374446 - type: nauc_recall_at_10_max value: 22.774943611615736 - type: nauc_recall_at_10_std value: 5.707280310402414 - type: nauc_recall_at_1_diff1 value: 37.68937044932118 - type: nauc_recall_at_1_max value: 27.785294663519327 - type: nauc_recall_at_1_std value: 2.281125893959806 - type: nauc_recall_at_20_diff1 value: 20.227099777176857 - type: nauc_recall_at_20_max value: 29.344314385773657 - type: nauc_recall_at_20_std value: 9.170201756879665 - type: nauc_recall_at_3_diff1 value: 26.785287089852357 - type: nauc_recall_at_3_max value: 26.721156438701684 - type: nauc_recall_at_3_std value: 5.14517396691279 - type: nauc_recall_at_5_diff1 value: 24.972251162551565 - type: nauc_recall_at_5_max value: 25.44929193960884 - type: nauc_recall_at_5_std value: 5.572691905709665 - type: ndcg_at_1 value: 20.398 - type: ndcg_at_10 value: 31.507 - type: ndcg_at_100 value: 38.116 - type: ndcg_at_1000 value: 40.564 - type: ndcg_at_20 value: 34.268 - type: ndcg_at_3 value: 25.358000000000004 - type: ndcg_at_5 value: 28.03 - type: precision_at_1 value: 20.398 - type: precision_at_10 value: 6.157 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.14400000000000002 - type: precision_at_20 value: 3.862 - type: precision_at_3 value: 12.272 - type: precision_at_5 value: 9.179 - type: recall_at_1 value: 16.398 - type: recall_at_10 value: 45.774 - type: recall_at_100 value: 74.099 - type: recall_at_1000 value: 90.979 - type: recall_at_20 value: 55.507 - type: recall_at_3 value: 29.176999999999996 - type: recall_at_5 value: 35.682 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackPhysicsRetrieval (default) revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 split: test type: mteb/cqadupstack-physics metrics: - type: main_score value: 45.95 - type: map_at_1 value: 28.138999999999996 - type: map_at_10 value: 39.495000000000005 - type: map_at_100 value: 40.949000000000005 - type: map_at_1000 value: 41.07 - type: map_at_20 value: 40.300999999999995 - 
type: map_at_3 value: 35.853 - type: map_at_5 value: 38.004 - type: mrr_at_1 value: 34.93743984600577 - type: mrr_at_10 value: 44.76801564385776 - type: mrr_at_100 value: 45.65247517257724 - type: mrr_at_1000 value: 45.68956227855384 - type: mrr_at_20 value: 45.29344639385099 - type: mrr_at_3 value: 41.819056785370506 - type: mrr_at_5 value: 43.58036573628482 - type: nauc_map_at_1000_diff1 value: 47.277436726452734 - type: nauc_map_at_1000_max value: 35.37747274882577 - type: nauc_map_at_1000_std value: -1.7620121730183462 - type: nauc_map_at_100_diff1 value: 47.29530211449703 - type: nauc_map_at_100_max value: 35.3267759107826 - type: nauc_map_at_100_std value: -1.9003611271296315 - type: nauc_map_at_10_diff1 value: 47.66123826774245 - type: nauc_map_at_10_max value: 34.898894823839974 - type: nauc_map_at_10_std value: -2.7367802679721382 - type: nauc_map_at_1_diff1 value: 53.39584452417071 - type: nauc_map_at_1_max value: 30.44376469140723 - type: nauc_map_at_1_std value: -6.88828726087523 - type: nauc_map_at_20_diff1 value: 47.3836198057246 - type: nauc_map_at_20_max value: 35.0413537966224 - type: nauc_map_at_20_std value: -2.545827885505845 - type: nauc_map_at_3_diff1 value: 48.16522210457787 - type: nauc_map_at_3_max value: 34.3965506492862 - type: nauc_map_at_3_std value: -3.3433431726479834 - type: nauc_map_at_5_diff1 value: 47.98417848216568 - type: nauc_map_at_5_max value: 34.73390747566828 - type: nauc_map_at_5_std value: -2.8612602838895826 - type: nauc_mrr_at_1000_diff1 value: 45.71218991600523 - type: nauc_mrr_at_1000_max value: 39.11582764653062 - type: nauc_mrr_at_1000_std value: 2.2783759668804344 - type: nauc_mrr_at_100_diff1 value: 45.70154738130718 - type: nauc_mrr_at_100_max value: 39.11273087180276 - type: nauc_mrr_at_100_std value: 2.2710830483092987 - type: nauc_mrr_at_10_diff1 value: 45.642234982637824 - type: nauc_mrr_at_10_max value: 38.92779723339438 - type: nauc_mrr_at_10_std value: 1.9256549539298882 - type: nauc_mrr_at_1_diff1 value: 50.71909609236056 - type: nauc_mrr_at_1_max value: 38.27951880430563 - type: nauc_mrr_at_1_std value: 0.6510875710711332 - type: nauc_mrr_at_20_diff1 value: 45.695383873021726 - type: nauc_mrr_at_20_max value: 39.022209591680394 - type: nauc_mrr_at_20_std value: 2.0413367142919605 - type: nauc_mrr_at_3_diff1 value: 45.64927722234226 - type: nauc_mrr_at_3_max value: 39.19282954961338 - type: nauc_mrr_at_3_std value: 2.474812709395244 - type: nauc_mrr_at_5_diff1 value: 45.80017070276982 - type: nauc_mrr_at_5_max value: 39.34045012221159 - type: nauc_mrr_at_5_std value: 2.3303744020843107 - type: nauc_ndcg_at_1000_diff1 value: 45.1794919744782 - type: nauc_ndcg_at_1000_max value: 37.73921904631251 - type: nauc_ndcg_at_1000_std value: 2.26624679124494 - type: nauc_ndcg_at_100_diff1 value: 44.73702317994642 - type: nauc_ndcg_at_100_max value: 37.0759462132415 - type: nauc_ndcg_at_100_std value: 1.4931392395285414 - type: nauc_ndcg_at_10_diff1 value: 45.25647711557264 - type: nauc_ndcg_at_10_max value: 35.70645701721464 - type: nauc_ndcg_at_10_std value: -1.4004314196958476 - type: nauc_ndcg_at_1_diff1 value: 50.71909609236056 - type: nauc_ndcg_at_1_max value: 38.27951880430563 - type: nauc_ndcg_at_1_std value: 0.6510875710711332 - type: nauc_ndcg_at_20_diff1 value: 44.76174297209143 - type: nauc_ndcg_at_20_max value: 35.822466503188686 - type: nauc_ndcg_at_20_std value: -1.0518640293785047 - type: nauc_ndcg_at_3_diff1 value: 45.068368130065146 - type: nauc_ndcg_at_3_max value: 37.02529090108255 - type: nauc_ndcg_at_3_std value: 
0.2157989475242898 - type: nauc_ndcg_at_5_diff1 value: 45.67384784064928 - type: nauc_ndcg_at_5_max value: 36.47208549375304 - type: nauc_ndcg_at_5_std value: -0.47643849090429163 - type: nauc_precision_at_1000_diff1 value: -17.125501936260747 - type: nauc_precision_at_1000_max value: 9.558040560420771 - type: nauc_precision_at_1000_std value: 31.800567902705996 - type: nauc_precision_at_100_diff1 value: -10.12964985687511 - type: nauc_precision_at_100_max value: 17.81597956519217 - type: nauc_precision_at_100_std value: 28.681055685422674 - type: nauc_precision_at_10_diff1 value: 11.455616137810187 - type: nauc_precision_at_10_max value: 32.707133153254944 - type: nauc_precision_at_10_std value: 15.18726232222438 - type: nauc_precision_at_1_diff1 value: 50.71909609236056 - type: nauc_precision_at_1_max value: 38.27951880430563 - type: nauc_precision_at_1_std value: 0.6510875710711332 - type: nauc_precision_at_20_diff1 value: 3.249020258643051 - type: nauc_precision_at_20_max value: 27.111481841291123 - type: nauc_precision_at_20_std value: 17.15563337285341 - type: nauc_precision_at_3_diff1 value: 25.76995146388162 - type: nauc_precision_at_3_max value: 38.48807924293779 - type: nauc_precision_at_3_std value: 10.764232529972658 - type: nauc_precision_at_5_diff1 value: 19.709759176886067 - type: nauc_precision_at_5_max value: 36.27102876926324 - type: nauc_precision_at_5_std value: 13.660107662673637 - type: nauc_recall_at_1000_diff1 value: 45.983533767225694 - type: nauc_recall_at_1000_max value: 57.47414401478259 - type: nauc_recall_at_1000_std value: 48.046202249413845 - type: nauc_recall_at_100_diff1 value: 32.07866475386506 - type: nauc_recall_at_100_max value: 33.96383964758634 - type: nauc_recall_at_100_std value: 12.252531516000694 - type: nauc_recall_at_10_diff1 value: 37.975164089633495 - type: nauc_recall_at_10_max value: 30.871630327404432 - type: nauc_recall_at_10_std value: -3.471373265508202 - type: nauc_recall_at_1_diff1 value: 53.39584452417071 - type: nauc_recall_at_1_max value: 30.44376469140723 - type: nauc_recall_at_1_std value: -6.88828726087523 - type: nauc_recall_at_20_diff1 value: 35.405722893633786 - type: nauc_recall_at_20_max value: 30.02056108542106 - type: nauc_recall_at_20_std value: -2.612688358596724 - type: nauc_recall_at_3_diff1 value: 41.68732922410159 - type: nauc_recall_at_3_max value: 33.76501870587258 - type: nauc_recall_at_3_std value: -2.1010829302018146 - type: nauc_recall_at_5_diff1 value: 41.076140933019545 - type: nauc_recall_at_5_max value: 33.11420354771476 - type: nauc_recall_at_5_std value: -2.37285059292278 - type: ndcg_at_1 value: 34.937000000000005 - type: ndcg_at_10 value: 45.95 - type: ndcg_at_100 value: 51.768 - type: ndcg_at_1000 value: 53.612 - type: ndcg_at_20 value: 48.309000000000005 - type: ndcg_at_3 value: 40.186 - type: ndcg_at_5 value: 43.111 - type: precision_at_1 value: 34.937000000000005 - type: precision_at_10 value: 8.73 - type: precision_at_100 value: 1.397 - type: precision_at_1000 value: 0.174 - type: precision_at_20 value: 5.135 - type: precision_at_3 value: 19.858999999999998 - type: precision_at_5 value: 14.456 - type: recall_at_1 value: 28.138999999999996 - type: recall_at_10 value: 59.646 - type: recall_at_100 value: 83.813 - type: recall_at_1000 value: 95.69800000000001 - type: recall_at_20 value: 68.09100000000001 - type: recall_at_3 value: 43.15 - type: recall_at_5 value: 50.876 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackProgrammersRetrieval (default) revision: 
6184bc1440d2dbc7612be22b50686b8826d22b32 split: test type: mteb/cqadupstack-programmers metrics: - type: main_score value: 38.568999999999996 - type: map_at_1 value: 23.288 - type: map_at_10 value: 32.771 - type: map_at_100 value: 34.146 - type: map_at_1000 value: 34.278 - type: map_at_20 value: 33.522999999999996 - type: map_at_3 value: 29.643000000000004 - type: map_at_5 value: 31.188 - type: mrr_at_1 value: 28.424657534246577 - type: mrr_at_10 value: 37.56790425454804 - type: mrr_at_100 value: 38.512708758710254 - type: mrr_at_1000 value: 38.58372536901956 - type: mrr_at_20 value: 38.13718150828231 - type: mrr_at_3 value: 34.874429223744286 - type: mrr_at_5 value: 36.34703196347027 - type: nauc_map_at_1000_diff1 value: 42.49932971514879 - type: nauc_map_at_1000_max value: 34.44449252084838 - type: nauc_map_at_1000_std value: 0.2820810427961318 - type: nauc_map_at_100_diff1 value: 42.48316842005717 - type: nauc_map_at_100_max value: 34.439643249096655 - type: nauc_map_at_100_std value: 0.2911248084637576 - type: nauc_map_at_10_diff1 value: 42.591951745001865 - type: nauc_map_at_10_max value: 33.848598117618984 - type: nauc_map_at_10_std value: -0.23217641524494498 - type: nauc_map_at_1_diff1 value: 48.9034806154442 - type: nauc_map_at_1_max value: 29.896917123056483 - type: nauc_map_at_1_std value: -2.439150046473878 - type: nauc_map_at_20_diff1 value: 42.52245588485265 - type: nauc_map_at_20_max value: 34.33890957691394 - type: nauc_map_at_20_std value: 0.04133780224148374 - type: nauc_map_at_3_diff1 value: 44.020947516815916 - type: nauc_map_at_3_max value: 33.15603327298969 - type: nauc_map_at_3_std value: -1.6868324998110078 - type: nauc_map_at_5_diff1 value: 42.82799596536379 - type: nauc_map_at_5_max value: 33.5235389139028 - type: nauc_map_at_5_std value: -1.233132343609442 - type: nauc_mrr_at_1000_diff1 value: 40.680718239040495 - type: nauc_mrr_at_1000_max value: 35.142026511262294 - type: nauc_mrr_at_1000_std value: 0.7212094420333764 - type: nauc_mrr_at_100_diff1 value: 40.674003566538424 - type: nauc_mrr_at_100_max value: 35.14079055322123 - type: nauc_mrr_at_100_std value: 0.73908883788415 - type: nauc_mrr_at_10_diff1 value: 40.65562886804231 - type: nauc_mrr_at_10_max value: 34.98579748882993 - type: nauc_mrr_at_10_std value: 0.4344145401378685 - type: nauc_mrr_at_1_diff1 value: 45.9859248383466 - type: nauc_mrr_at_1_max value: 33.57626905034157 - type: nauc_mrr_at_1_std value: -0.7348734835041326 - type: nauc_mrr_at_20_diff1 value: 40.6576258022829 - type: nauc_mrr_at_20_max value: 35.154195627553406 - type: nauc_mrr_at_20_std value: 0.634477977849603 - type: nauc_mrr_at_3_diff1 value: 41.488252532122885 - type: nauc_mrr_at_3_max value: 35.1085446216396 - type: nauc_mrr_at_3_std value: -0.26518839484649515 - type: nauc_mrr_at_5_diff1 value: 40.420541677182555 - type: nauc_mrr_at_5_max value: 35.22172462701993 - type: nauc_mrr_at_5_std value: -0.25848045409750375 - type: nauc_ndcg_at_1000_diff1 value: 40.546229948037514 - type: nauc_ndcg_at_1000_max value: 35.55631228234972 - type: nauc_ndcg_at_1000_std value: 2.9837182102880524 - type: nauc_ndcg_at_100_diff1 value: 40.09271767110789 - type: nauc_ndcg_at_100_max value: 35.854478264686854 - type: nauc_ndcg_at_100_std value: 3.792167239853245 - type: nauc_ndcg_at_10_diff1 value: 40.3083818339378 - type: nauc_ndcg_at_10_max value: 34.51597389995391 - type: nauc_ndcg_at_10_std value: 0.9186474992446686 - type: nauc_ndcg_at_1_diff1 value: 45.9859248383466 - type: nauc_ndcg_at_1_max value: 33.57626905034157 - type: 
nauc_ndcg_at_1_std value: -0.7348734835041326 - type: nauc_ndcg_at_20_diff1 value: 40.04167502138235 - type: nauc_ndcg_at_20_max value: 35.66297308624731 - type: nauc_ndcg_at_20_std value: 1.9289709947356306 - type: nauc_ndcg_at_3_diff1 value: 42.008639920375195 - type: nauc_ndcg_at_3_max value: 35.013390036837244 - type: nauc_ndcg_at_3_std value: -1.0593134654956005 - type: nauc_ndcg_at_5_diff1 value: 40.249687670087816 - type: nauc_ndcg_at_5_max value: 34.6889269216984 - type: nauc_ndcg_at_5_std value: -0.7707068993680779 - type: nauc_precision_at_1000_diff1 value: -8.682121420487263 - type: nauc_precision_at_1000_max value: 0.7494169797223946 - type: nauc_precision_at_1000_std value: 1.1392663482283298 - type: nauc_precision_at_100_diff1 value: 1.518759603729437 - type: nauc_precision_at_100_max value: 16.851433601465114 - type: nauc_precision_at_100_std value: 10.931374575070386 - type: nauc_precision_at_10_diff1 value: 16.794492920670503 - type: nauc_precision_at_10_max value: 32.02893820418532 - type: nauc_precision_at_10_std value: 6.61722505687105 - type: nauc_precision_at_1_diff1 value: 45.9859248383466 - type: nauc_precision_at_1_max value: 33.57626905034157 - type: nauc_precision_at_1_std value: -0.7348734835041326 - type: nauc_precision_at_20_diff1 value: 11.0013674864101 - type: nauc_precision_at_20_max value: 30.81307549333853 - type: nauc_precision_at_20_std value: 8.876146343359824 - type: nauc_precision_at_3_diff1 value: 30.358294150549263 - type: nauc_precision_at_3_max value: 38.543301462870424 - type: nauc_precision_at_3_std value: 2.867784756736603 - type: nauc_precision_at_5_diff1 value: 22.94487457136215 - type: nauc_precision_at_5_max value: 37.54100803628419 - type: nauc_precision_at_5_std value: 3.839791032775397 - type: nauc_recall_at_1000_diff1 value: 28.970034771420355 - type: nauc_recall_at_1000_max value: 38.787768893627785 - type: nauc_recall_at_1000_std value: 53.81040950703603 - type: nauc_recall_at_100_diff1 value: 29.583524837481896 - type: nauc_recall_at_100_max value: 37.202619359626986 - type: nauc_recall_at_100_std value: 24.086399035427473 - type: nauc_recall_at_10_diff1 value: 33.45660381923287 - type: nauc_recall_at_10_max value: 31.879186393042907 - type: nauc_recall_at_10_std value: 3.6546901922261297 - type: nauc_recall_at_1_diff1 value: 48.9034806154442 - type: nauc_recall_at_1_max value: 29.896917123056483 - type: nauc_recall_at_1_std value: -2.439150046473878 - type: nauc_recall_at_20_diff1 value: 31.821081500540053 - type: nauc_recall_at_20_max value: 35.5790141630673 - type: nauc_recall_at_20_std value: 7.549207597971351 - type: nauc_recall_at_3_diff1 value: 38.99447500514566 - type: nauc_recall_at_3_max value: 33.57706513619638 - type: nauc_recall_at_3_std value: -1.5778066212304678 - type: nauc_recall_at_5_diff1 value: 34.06626536124258 - type: nauc_recall_at_5_max value: 32.71331577702591 - type: nauc_recall_at_5_std value: -1.3814107710485413 - type: ndcg_at_1 value: 28.425 - type: ndcg_at_10 value: 38.568999999999996 - type: ndcg_at_100 value: 44.529 - type: ndcg_at_1000 value: 47.231 - type: ndcg_at_20 value: 40.877 - type: ndcg_at_3 value: 33.289 - type: ndcg_at_5 value: 35.480000000000004 - type: precision_at_1 value: 28.425 - type: precision_at_10 value: 7.317 - type: precision_at_100 value: 1.2109999999999999 - type: precision_at_1000 value: 0.163 - type: precision_at_20 value: 4.3549999999999995 - type: precision_at_3 value: 16.096 - type: precision_at_5 value: 11.507000000000001 - type: recall_at_1 value: 23.288 - type: 
recall_at_10 value: 51.010999999999996 - type: recall_at_100 value: 76.566 - type: recall_at_1000 value: 94.774 - type: recall_at_20 value: 59.24 - type: recall_at_3 value: 36.236000000000004 - type: recall_at_5 value: 42.243 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackRetrieval (default) revision: CQADupstackRetrieval_is_a_combined_dataset split: test type: CQADupstackRetrieval_is_a_combined_dataset metrics: - type: main_score value: 40.812583333333336 - type: ndcg_at_10 value: 40.812583333333336 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackStatsRetrieval (default) revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a split: test type: mteb/cqadupstack-stats metrics: - type: main_score value: 34.053 - type: map_at_1 value: 21.135 - type: map_at_10 value: 29.336000000000002 - type: map_at_100 value: 30.429000000000002 - type: map_at_1000 value: 30.523 - type: map_at_20 value: 29.946 - type: map_at_3 value: 26.751 - type: map_at_5 value: 28.147 - type: mrr_at_1 value: 24.079754601226995 - type: mrr_at_10 value: 31.98570941669101 - type: mrr_at_100 value: 32.87704399381717 - type: mrr_at_1000 value: 32.950784120634474 - type: mrr_at_20 value: 32.48647576315244 - type: mrr_at_3 value: 29.933537832310854 - type: mrr_at_5 value: 30.99182004089981 - type: nauc_map_at_1000_diff1 value: 40.54921025385474 - type: nauc_map_at_1000_max value: 34.77575879898821 - type: nauc_map_at_1000_std value: 8.948775266395762 - type: nauc_map_at_100_diff1 value: 40.54349367224357 - type: nauc_map_at_100_max value: 34.757684235242756 - type: nauc_map_at_100_std value: 8.930005514047473 - type: nauc_map_at_10_diff1 value: 40.6730737054752 - type: nauc_map_at_10_max value: 34.378525509672194 - type: nauc_map_at_10_std value: 7.989204273097626 - type: nauc_map_at_1_diff1 value: 49.37407707269144 - type: nauc_map_at_1_max value: 32.21678855112054 - type: nauc_map_at_1_std value: 3.854422840353318 - type: nauc_map_at_20_diff1 value: 40.632949610967124 - type: nauc_map_at_20_max value: 34.59900516061919 - type: nauc_map_at_20_std value: 8.565328686423722 - type: nauc_map_at_3_diff1 value: 41.277384595957884 - type: nauc_map_at_3_max value: 33.206185913755924 - type: nauc_map_at_3_std value: 5.717825876602881 - type: nauc_map_at_5_diff1 value: 40.81938763414844 - type: nauc_map_at_5_max value: 34.601640205395725 - type: nauc_map_at_5_std value: 7.959501488950282 - type: nauc_mrr_at_1000_diff1 value: 41.643791133769284 - type: nauc_mrr_at_1000_max value: 36.86860098522057 - type: nauc_mrr_at_1000_std value: 11.16203371889211 - type: nauc_mrr_at_100_diff1 value: 41.638414475074086 - type: nauc_mrr_at_100_max value: 36.8731893852322 - type: nauc_mrr_at_100_std value: 11.160133493075861 - type: nauc_mrr_at_10_diff1 value: 41.75277747893673 - type: nauc_mrr_at_10_max value: 36.66922318811368 - type: nauc_mrr_at_10_std value: 10.524888054922569 - type: nauc_mrr_at_1_diff1 value: 49.68943847753842 - type: nauc_mrr_at_1_max value: 35.88824714503076 - type: nauc_mrr_at_1_std value: 7.9139229045391515 - type: nauc_mrr_at_20_diff1 value: 41.6522451889601 - type: nauc_mrr_at_20_max value: 36.89632890368892 - type: nauc_mrr_at_20_std value: 11.00866443920579 - type: nauc_mrr_at_3_diff1 value: 41.740543551542395 - type: nauc_mrr_at_3_max value: 36.637108493657536 - type: nauc_mrr_at_3_std value: 9.550432374486645 - type: nauc_mrr_at_5_diff1 value: 41.74370752738844 - type: nauc_mrr_at_5_max value: 36.88456148864922 - type: nauc_mrr_at_5_std value: 10.424655726913185 - type: 
nauc_ndcg_at_1000_diff1 value: 38.51408332292014 - type: nauc_ndcg_at_1000_max value: 36.09571315118831 - type: nauc_ndcg_at_1000_std value: 13.474870360532162 - type: nauc_ndcg_at_100_diff1 value: 38.019174212684376 - type: nauc_ndcg_at_100_max value: 35.93119149466668 - type: nauc_ndcg_at_100_std value: 13.2297549585252 - type: nauc_ndcg_at_10_diff1 value: 38.916423405975245 - type: nauc_ndcg_at_10_max value: 35.08064464520372 - type: nauc_ndcg_at_10_std value: 9.782593893240483 - type: nauc_ndcg_at_1_diff1 value: 49.68943847753842 - type: nauc_ndcg_at_1_max value: 35.88824714503076 - type: nauc_ndcg_at_1_std value: 7.9139229045391515 - type: nauc_ndcg_at_20_diff1 value: 38.63750293826562 - type: nauc_ndcg_at_20_max value: 35.739247440468915 - type: nauc_ndcg_at_20_std value: 11.539481993254595 - type: nauc_ndcg_at_3_diff1 value: 39.28042269065498 - type: nauc_ndcg_at_3_max value: 34.63564871034042 - type: nauc_ndcg_at_3_std value: 7.49191071861962 - type: nauc_ndcg_at_5_diff1 value: 39.003890765463304 - type: nauc_ndcg_at_5_max value: 35.71265272584242 - type: nauc_ndcg_at_5_std value: 9.927346415193906 - type: nauc_precision_at_1000_diff1 value: -4.072989249391745 - type: nauc_precision_at_1000_max value: 19.44622146624579 - type: nauc_precision_at_1000_std value: 27.981364695896975 - type: nauc_precision_at_100_diff1 value: 7.321844285435558 - type: nauc_precision_at_100_max value: 30.52844606095081 - type: nauc_precision_at_100_std value: 31.072929769770514 - type: nauc_precision_at_10_diff1 value: 21.853502178071917 - type: nauc_precision_at_10_max value: 37.30474313788762 - type: nauc_precision_at_10_std value: 21.687548959106135 - type: nauc_precision_at_1_diff1 value: 49.68943847753842 - type: nauc_precision_at_1_max value: 35.88824714503076 - type: nauc_precision_at_1_std value: 7.9139229045391515 - type: nauc_precision_at_20_diff1 value: 17.749818352515526 - type: nauc_precision_at_20_max value: 35.765806180376416 - type: nauc_precision_at_20_std value: 25.411075024216252 - type: nauc_precision_at_3_diff1 value: 29.632039358651944 - type: nauc_precision_at_3_max value: 39.65790064788028 - type: nauc_precision_at_3_std value: 16.540404435515757 - type: nauc_precision_at_5_diff1 value: 25.65567821866759 - type: nauc_precision_at_5_max value: 41.39961363618346 - type: nauc_precision_at_5_std value: 22.5672145347772 - type: nauc_recall_at_1000_diff1 value: 25.719490205486846 - type: nauc_recall_at_1000_max value: 34.913128477025914 - type: nauc_recall_at_1000_std value: 45.9261202538083 - type: nauc_recall_at_100_diff1 value: 25.23717510439502 - type: nauc_recall_at_100_max value: 33.50919363278291 - type: nauc_recall_at_100_std value: 26.975483944416958 - type: nauc_recall_at_10_diff1 value: 32.84494569562565 - type: nauc_recall_at_10_max value: 32.92279016208031 - type: nauc_recall_at_10_std value: 11.273741379749255 - type: nauc_recall_at_1_diff1 value: 49.37407707269144 - type: nauc_recall_at_1_max value: 32.21678855112054 - type: nauc_recall_at_1_std value: 3.854422840353318 - type: nauc_recall_at_20_diff1 value: 31.136285249615742 - type: nauc_recall_at_20_max value: 34.60047592054581 - type: nauc_recall_at_20_std value: 17.167916916392333 - type: nauc_recall_at_3_diff1 value: 34.20242999662983 - type: nauc_recall_at_3_max value: 31.922366101615964 - type: nauc_recall_at_3_std value: 5.5073356683470065 - type: nauc_recall_at_5_diff1 value: 33.737565539089545 - type: nauc_recall_at_5_max value: 35.18290702820585 - type: nauc_recall_at_5_std value: 11.516166260788946 - type: 
ndcg_at_1 value: 24.08 - type: ndcg_at_10 value: 34.053 - type: ndcg_at_100 value: 39.214 - type: ndcg_at_1000 value: 41.672 - type: ndcg_at_20 value: 35.998999999999995 - type: ndcg_at_3 value: 29.429 - type: ndcg_at_5 value: 31.480000000000004 - type: precision_at_1 value: 24.08 - type: precision_at_10 value: 5.752 - type: precision_at_100 value: 0.906 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_20 value: 3.3970000000000002 - type: precision_at_3 value: 13.344000000000001 - type: precision_at_5 value: 9.417 - type: recall_at_1 value: 21.135 - type: recall_at_10 value: 45.82 - type: recall_at_100 value: 69.414 - type: recall_at_1000 value: 87.653 - type: recall_at_20 value: 53.044999999999995 - type: recall_at_3 value: 32.914 - type: recall_at_5 value: 37.967 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackTexRetrieval (default) revision: 46989137a86843e03a6195de44b09deda022eec7 split: test type: mteb/cqadupstack-tex metrics: - type: main_score value: 31.079 - type: map_at_1 value: 17.498 - type: map_at_10 value: 25.761 - type: map_at_100 value: 27.117 - type: map_at_1000 value: 27.249000000000002 - type: map_at_20 value: 26.487 - type: map_at_3 value: 22.8 - type: map_at_5 value: 24.404 - type: mrr_at_1 value: 21.92016517549897 - type: mrr_at_10 value: 29.885472312347773 - type: mrr_at_100 value: 30.9160324896262 - type: mrr_at_1000 value: 30.991168841711332 - type: mrr_at_20 value: 30.452602988671234 - type: mrr_at_3 value: 27.208075246616247 - type: mrr_at_5 value: 28.703257627896395 - type: nauc_map_at_1000_diff1 value: 31.106783614540877 - type: nauc_map_at_1000_max value: 28.6225885387827 - type: nauc_map_at_1000_std value: 6.0841248710509515 - type: nauc_map_at_100_diff1 value: 31.08489579492722 - type: nauc_map_at_100_max value: 28.557756475282236 - type: nauc_map_at_100_std value: 6.052003313810615 - type: nauc_map_at_10_diff1 value: 31.48175165089029 - type: nauc_map_at_10_max value: 28.385150030692763 - type: nauc_map_at_10_std value: 5.495659063667348 - type: nauc_map_at_1_diff1 value: 38.27868635676493 - type: nauc_map_at_1_max value: 26.68660889337588 - type: nauc_map_at_1_std value: 3.160872467743023 - type: nauc_map_at_20_diff1 value: 31.207065661313653 - type: nauc_map_at_20_max value: 28.409403987888872 - type: nauc_map_at_20_std value: 5.663005924021793 - type: nauc_map_at_3_diff1 value: 33.081360769026695 - type: nauc_map_at_3_max value: 28.052813330220555 - type: nauc_map_at_3_std value: 4.494259362042225 - type: nauc_map_at_5_diff1 value: 31.969111899422543 - type: nauc_map_at_5_max value: 27.92017739432563 - type: nauc_map_at_5_std value: 4.811795252340103 - type: nauc_mrr_at_1000_diff1 value: 29.29292415286063 - type: nauc_mrr_at_1000_max value: 29.213774281822502 - type: nauc_mrr_at_1000_std value: 7.033800724186415 - type: nauc_mrr_at_100_diff1 value: 29.2626300487415 - type: nauc_mrr_at_100_max value: 29.204727919066332 - type: nauc_mrr_at_100_std value: 7.045304952598549 - type: nauc_mrr_at_10_diff1 value: 29.387229358683616 - type: nauc_mrr_at_10_max value: 29.294544480801417 - type: nauc_mrr_at_10_std value: 6.775611131886977 - type: nauc_mrr_at_1_diff1 value: 34.788621456797806 - type: nauc_mrr_at_1_max value: 28.589390080521447 - type: nauc_mrr_at_1_std value: 5.031155628869166 - type: nauc_mrr_at_20_diff1 value: 29.269757265619585 - type: nauc_mrr_at_20_max value: 29.192868249024183 - type: nauc_mrr_at_20_std value: 6.895053312400131 - type: nauc_mrr_at_3_diff1 value: 30.45886158188182 - type: 
nauc_mrr_at_3_max value: 29.126637772912332 - type: nauc_mrr_at_3_std value: 5.887845131702013 - type: nauc_mrr_at_5_diff1 value: 29.5830069273922 - type: nauc_mrr_at_5_max value: 29.023599660723566 - type: nauc_mrr_at_5_std value: 6.333484322706432 - type: nauc_ndcg_at_1000_diff1 value: 28.299997368370704 - type: nauc_ndcg_at_1000_max value: 29.888737990959356 - type: nauc_ndcg_at_1000_std value: 8.946268519586546 - type: nauc_ndcg_at_100_diff1 value: 27.626227219466298 - type: nauc_ndcg_at_100_max value: 29.449363310563655 - type: nauc_ndcg_at_100_std value: 9.152695620470102 - type: nauc_ndcg_at_10_diff1 value: 28.712684866650427 - type: nauc_ndcg_at_10_max value: 29.193277467967206 - type: nauc_ndcg_at_10_std value: 6.998413207889944 - type: nauc_ndcg_at_1_diff1 value: 34.788621456797806 - type: nauc_ndcg_at_1_max value: 28.589390080521447 - type: nauc_ndcg_at_1_std value: 5.031155628869166 - type: nauc_ndcg_at_20_diff1 value: 28.12146242755772 - type: nauc_ndcg_at_20_max value: 28.9603042379236 - type: nauc_ndcg_at_20_std value: 7.4783571354648055 - type: nauc_ndcg_at_3_diff1 value: 30.7052417569527 - type: nauc_ndcg_at_3_max value: 29.20315655011043 - type: nauc_ndcg_at_3_std value: 5.388849801543028 - type: nauc_ndcg_at_5_diff1 value: 29.36802229342131 - type: nauc_ndcg_at_5_max value: 28.574869226548653 - type: nauc_ndcg_at_5_std value: 5.792067593971809 - type: nauc_precision_at_1000_diff1 value: -1.0955902695363287 - type: nauc_precision_at_1000_max value: 22.263203721640487 - type: nauc_precision_at_1000_std value: 15.962211276640174 - type: nauc_precision_at_100_diff1 value: 1.697299406682818 - type: nauc_precision_at_100_max value: 25.552155435237268 - type: nauc_precision_at_100_std value: 19.630389535049883 - type: nauc_precision_at_10_diff1 value: 13.297756573713581 - type: nauc_precision_at_10_max value: 30.477988337514095 - type: nauc_precision_at_10_std value: 11.930425415137833 - type: nauc_precision_at_1_diff1 value: 34.788621456797806 - type: nauc_precision_at_1_max value: 28.589390080521447 - type: nauc_precision_at_1_std value: 5.031155628869166 - type: nauc_precision_at_20_diff1 value: 9.074294316929793 - type: nauc_precision_at_20_max value: 28.035274038120022 - type: nauc_precision_at_20_std value: 13.372503140399717 - type: nauc_precision_at_3_diff1 value: 22.46944870859152 - type: nauc_precision_at_3_max value: 30.62699826426078 - type: nauc_precision_at_3_std value: 7.446849435999603 - type: nauc_precision_at_5_diff1 value: 17.272667706952305 - type: nauc_precision_at_5_max value: 29.24179840910647 - type: nauc_precision_at_5_std value: 8.644639430581861 - type: nauc_recall_at_1000_diff1 value: 12.871611027617524 - type: nauc_recall_at_1000_max value: 36.71435304856158 - type: nauc_recall_at_1000_std value: 28.620443354999125 - type: nauc_recall_at_100_diff1 value: 14.961011668882907 - type: nauc_recall_at_100_max value: 28.301006654430267 - type: nauc_recall_at_100_std value: 19.460217217927635 - type: nauc_recall_at_10_diff1 value: 21.85511402624238 - type: nauc_recall_at_10_max value: 27.432304380395227 - type: nauc_recall_at_10_std value: 8.78952978670911 - type: nauc_recall_at_1_diff1 value: 38.27868635676493 - type: nauc_recall_at_1_max value: 26.68660889337588 - type: nauc_recall_at_1_std value: 3.160872467743023 - type: nauc_recall_at_20_diff1 value: 19.752305046339135 - type: nauc_recall_at_20_max value: 26.236548609605155 - type: nauc_recall_at_20_std value: 10.087194624804992 - type: nauc_recall_at_3_diff1 value: 27.751788017220413 - type: 
nauc_recall_at_3_max value: 27.44474563712776 - type: nauc_recall_at_3_std value: 5.050595287046713 - type: nauc_recall_at_5_diff1 value: 24.20044216232329 - type: nauc_recall_at_5_max value: 26.30261864862886 - type: nauc_recall_at_5_std value: 5.945475035324601 - type: ndcg_at_1 value: 21.92 - type: ndcg_at_10 value: 31.079 - type: ndcg_at_100 value: 37.254 - type: ndcg_at_1000 value: 40.075 - type: ndcg_at_20 value: 33.331 - type: ndcg_at_3 value: 25.889 - type: ndcg_at_5 value: 28.253 - type: precision_at_1 value: 21.92 - type: precision_at_10 value: 6.032 - type: precision_at_100 value: 1.088 - type: precision_at_1000 value: 0.152 - type: precision_at_20 value: 3.6839999999999997 - type: precision_at_3 value: 12.549 - type: precision_at_5 value: 9.36 - type: recall_at_1 value: 17.498 - type: recall_at_10 value: 42.824 - type: recall_at_100 value: 70.537 - type: recall_at_1000 value: 90.416 - type: recall_at_20 value: 51.13799999999999 - type: recall_at_3 value: 28.374 - type: recall_at_5 value: 34.452 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackUnixRetrieval (default) revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 split: test type: mteb/cqadupstack-unix metrics: - type: main_score value: 43.159 - type: map_at_1 value: 27.3 - type: map_at_10 value: 37.471 - type: map_at_100 value: 38.847 - type: map_at_1000 value: 38.95 - type: map_at_20 value: 38.238 - type: map_at_3 value: 34.343 - type: map_at_5 value: 35.731 - type: mrr_at_1 value: 32.46268656716418 - type: mrr_at_10 value: 41.2610681710495 - type: mrr_at_100 value: 42.31204325599387 - type: mrr_at_1000 value: 42.36679795654589 - type: mrr_at_20 value: 41.91619388699724 - type: mrr_at_3 value: 38.75932835820893 - type: mrr_at_5 value: 39.86473880597011 - type: nauc_map_at_1000_diff1 value: 38.41373782401318 - type: nauc_map_at_1000_max value: 36.26799123342812 - type: nauc_map_at_1000_std value: -1.6362179601270626 - type: nauc_map_at_100_diff1 value: 38.39090799924858 - type: nauc_map_at_100_max value: 36.25156851372077 - type: nauc_map_at_100_std value: -1.6616566136661386 - type: nauc_map_at_10_diff1 value: 38.696198873516394 - type: nauc_map_at_10_max value: 36.16128806023898 - type: nauc_map_at_10_std value: -2.130791362526739 - type: nauc_map_at_1_diff1 value: 43.37285405382472 - type: nauc_map_at_1_max value: 31.642684614767276 - type: nauc_map_at_1_std value: -5.540771932525815 - type: nauc_map_at_20_diff1 value: 38.4878820103522 - type: nauc_map_at_20_max value: 36.2292631214085 - type: nauc_map_at_20_std value: -1.843545810350474 - type: nauc_map_at_3_diff1 value: 39.95370528114123 - type: nauc_map_at_3_max value: 34.23886393938281 - type: nauc_map_at_3_std value: -4.809386044288499 - type: nauc_map_at_5_diff1 value: 39.16911049006275 - type: nauc_map_at_5_max value: 35.2191378957676 - type: nauc_map_at_5_std value: -3.5005953042339653 - type: nauc_mrr_at_1000_diff1 value: 36.18437691856584 - type: nauc_mrr_at_1000_max value: 37.70260458140709 - type: nauc_mrr_at_1000_std value: -0.11426638420398647 - type: nauc_mrr_at_100_diff1 value: 36.14459892119925 - type: nauc_mrr_at_100_max value: 37.67599501718511 - type: nauc_mrr_at_100_std value: -0.12595268985648042 - type: nauc_mrr_at_10_diff1 value: 36.27951441810734 - type: nauc_mrr_at_10_max value: 37.8613553159427 - type: nauc_mrr_at_10_std value: -0.20119672771815636 - type: nauc_mrr_at_1_diff1 value: 39.88957563396634 - type: nauc_mrr_at_1_max value: 36.60257391624788 - type: nauc_mrr_at_1_std value: -2.841791526837743 - type: 
nauc_mrr_at_20_diff1 value: 36.103682088537184 - type: nauc_mrr_at_20_max value: 37.7488620725621 - type: nauc_mrr_at_20_std value: -0.08430051028531159 - type: nauc_mrr_at_3_diff1 value: 36.8623555151354 - type: nauc_mrr_at_3_max value: 37.36094868304334 - type: nauc_mrr_at_3_std value: -1.8677123750289681 - type: nauc_mrr_at_5_diff1 value: 36.59448977125284 - type: nauc_mrr_at_5_max value: 37.8339219805799 - type: nauc_mrr_at_5_std value: -1.1457629757330636 - type: nauc_ndcg_at_1000_diff1 value: 36.0265401975872 - type: nauc_ndcg_at_1000_max value: 37.45317419626094 - type: nauc_ndcg_at_1000_std value: 1.6036524279201412 - type: nauc_ndcg_at_100_diff1 value: 35.29323774869888 - type: nauc_ndcg_at_100_max value: 37.10845147549922 - type: nauc_ndcg_at_100_std value: 1.5535678575966316 - type: nauc_ndcg_at_10_diff1 value: 36.180138557973876 - type: nauc_ndcg_at_10_max value: 37.411761737356 - type: nauc_ndcg_at_10_std value: 0.5165748912750906 - type: nauc_ndcg_at_1_diff1 value: 39.88957563396634 - type: nauc_ndcg_at_1_max value: 36.60257391624788 - type: nauc_ndcg_at_1_std value: -2.841791526837743 - type: nauc_ndcg_at_20_diff1 value: 35.610864685140825 - type: nauc_ndcg_at_20_max value: 37.34524578592843 - type: nauc_ndcg_at_20_std value: 1.2655274462928487 - type: nauc_ndcg_at_3_diff1 value: 37.581668636974165 - type: nauc_ndcg_at_3_max value: 35.69832549832282 - type: nauc_ndcg_at_3_std value: -3.6448795589159597 - type: nauc_ndcg_at_5_diff1 value: 36.991833727312326 - type: nauc_ndcg_at_5_max value: 36.30298998287306 - type: nauc_ndcg_at_5_std value: -2.2622164190279284 - type: nauc_precision_at_1000_diff1 value: -13.70160706618185 - type: nauc_precision_at_1000_max value: 5.460656856541568 - type: nauc_precision_at_1000_std value: 12.62097770232154 - type: nauc_precision_at_100_diff1 value: -6.499308352387093 - type: nauc_precision_at_100_max value: 20.353724456457627 - type: nauc_precision_at_100_std value: 17.95090127045124 - type: nauc_precision_at_10_diff1 value: 12.806170332318779 - type: nauc_precision_at_10_max value: 39.23341562181076 - type: nauc_precision_at_10_std value: 13.987639599584101 - type: nauc_precision_at_1_diff1 value: 39.88957563396634 - type: nauc_precision_at_1_max value: 36.60257391624788 - type: nauc_precision_at_1_std value: -2.841791526837743 - type: nauc_precision_at_20_diff1 value: 5.332995851304963 - type: nauc_precision_at_20_max value: 34.53916605109217 - type: nauc_precision_at_20_std value: 17.69712324408807 - type: nauc_precision_at_3_diff1 value: 25.4753762848704 - type: nauc_precision_at_3_max value: 38.50846428335884 - type: nauc_precision_at_3_std value: 1.9628066846211143 - type: nauc_precision_at_5_diff1 value: 20.301539878169468 - type: nauc_precision_at_5_max value: 39.951735387397015 - type: nauc_precision_at_5_std value: 6.793501039657923 - type: nauc_recall_at_1000_diff1 value: 22.203986100593294 - type: nauc_recall_at_1000_max value: 48.075623185241035 - type: nauc_recall_at_1000_std value: 50.75401433229061 - type: nauc_recall_at_100_diff1 value: 19.9395981394293 - type: nauc_recall_at_100_max value: 32.7460953002592 - type: nauc_recall_at_100_std value: 14.324454594663344 - type: nauc_recall_at_10_diff1 value: 29.346896169220532 - type: nauc_recall_at_10_max value: 36.68032432093537 - type: nauc_recall_at_10_std value: 6.713544668966169 - type: nauc_recall_at_1_diff1 value: 43.37285405382472 - type: nauc_recall_at_1_max value: 31.642684614767276 - type: nauc_recall_at_1_std value: -5.540771932525815 - type: nauc_recall_at_20_diff1 
value: 26.448398177241923 - type: nauc_recall_at_20_max value: 35.7341164237797 - type: nauc_recall_at_20_std value: 9.476528957596907 - type: nauc_recall_at_3_diff1 value: 35.01162975513612 - type: nauc_recall_at_3_max value: 32.6400708326739 - type: nauc_recall_at_3_std value: -5.272713547738796 - type: nauc_recall_at_5_diff1 value: 32.78451551990977 - type: nauc_recall_at_5_max value: 34.48194190611458 - type: nauc_recall_at_5_std value: -1.532660412317024 - type: ndcg_at_1 value: 32.462999999999994 - type: ndcg_at_10 value: 43.159 - type: ndcg_at_100 value: 49.052 - type: ndcg_at_1000 value: 51.132 - type: ndcg_at_20 value: 45.57 - type: ndcg_at_3 value: 37.735 - type: ndcg_at_5 value: 39.556000000000004 - type: precision_at_1 value: 32.462999999999994 - type: precision_at_10 value: 7.593 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.147 - type: precision_at_20 value: 4.515000000000001 - type: precision_at_3 value: 17.32 - type: precision_at_5 value: 11.996 - type: recall_at_1 value: 27.3 - type: recall_at_10 value: 56.642 - type: recall_at_100 value: 81.525 - type: recall_at_1000 value: 95.806 - type: recall_at_20 value: 65.11099999999999 - type: recall_at_3 value: 41.469 - type: recall_at_5 value: 46.163 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWebmastersRetrieval (default) revision: 160c094312a0e1facb97e55eeddb698c0abe3571 split: test type: mteb/cqadupstack-webmasters metrics: - type: main_score value: 41.297 - type: map_at_1 value: 24.239 - type: map_at_10 value: 34.958 - type: map_at_100 value: 36.758 - type: map_at_1000 value: 36.989 - type: map_at_20 value: 35.9 - type: map_at_3 value: 31.673000000000002 - type: map_at_5 value: 33.675 - type: mrr_at_1 value: 28.458498023715418 - type: mrr_at_10 value: 39.1959972394755 - type: mrr_at_100 value: 40.18847772327865 - type: mrr_at_1000 value: 40.23773735721479 - type: mrr_at_20 value: 39.774954617486664 - type: mrr_at_3 value: 36.29776021080371 - type: mrr_at_5 value: 38.155467720685124 - type: nauc_map_at_1000_diff1 value: 36.874966531359455 - type: nauc_map_at_1000_max value: 32.925100088208865 - type: nauc_map_at_1000_std value: 2.9838945484541397 - type: nauc_map_at_100_diff1 value: 37.05086874383793 - type: nauc_map_at_100_max value: 32.93389217056399 - type: nauc_map_at_100_std value: 2.9144257345054467 - type: nauc_map_at_10_diff1 value: 37.37730997185654 - type: nauc_map_at_10_max value: 32.06774672790129 - type: nauc_map_at_10_std value: 1.8429020159075118 - type: nauc_map_at_1_diff1 value: 43.561442228636224 - type: nauc_map_at_1_max value: 30.139564573234708 - type: nauc_map_at_1_std value: -4.277872763081037 - type: nauc_map_at_20_diff1 value: 37.35659896740046 - type: nauc_map_at_20_max value: 32.5517336397131 - type: nauc_map_at_20_std value: 2.4435488993754753 - type: nauc_map_at_3_diff1 value: 37.473068855167796 - type: nauc_map_at_3_max value: 31.59112945346696 - type: nauc_map_at_3_std value: 1.6092320432651679 - type: nauc_map_at_5_diff1 value: 38.01435766254902 - type: nauc_map_at_5_max value: 31.922726165398448 - type: nauc_map_at_5_std value: 1.5628995834664126 - type: nauc_mrr_at_1000_diff1 value: 36.12153078238164 - type: nauc_mrr_at_1000_max value: 32.22061637683918 - type: nauc_mrr_at_1000_std value: 3.5366279283280204 - type: nauc_mrr_at_100_diff1 value: 36.1188638601524 - type: nauc_mrr_at_100_max value: 32.20952783773952 - type: nauc_mrr_at_100_std value: 3.53367306991397 - type: nauc_mrr_at_10_diff1 value: 36.24545839812258 - type: 
nauc_mrr_at_10_max value: 32.02781480404713 - type: nauc_mrr_at_10_std value: 3.2237214571781756 - type: nauc_mrr_at_1_diff1 value: 39.43286392756712 - type: nauc_mrr_at_1_max value: 31.633687884951282 - type: nauc_mrr_at_1_std value: -0.27764785706458617 - type: nauc_mrr_at_20_diff1 value: 36.13109306056499 - type: nauc_mrr_at_20_max value: 32.24845518587824 - type: nauc_mrr_at_20_std value: 3.5527414665451666 - type: nauc_mrr_at_3_diff1 value: 35.694561950510526 - type: nauc_mrr_at_3_max value: 32.34372143829758 - type: nauc_mrr_at_3_std value: 3.2831272959106585 - type: nauc_mrr_at_5_diff1 value: 36.47407512657775 - type: nauc_mrr_at_5_max value: 32.245870671508776 - type: nauc_mrr_at_5_std value: 3.22208428921342 - type: nauc_ndcg_at_1000_diff1 value: 34.94498046337473 - type: nauc_ndcg_at_1000_max value: 34.00274024637903 - type: nauc_ndcg_at_1000_std value: 6.037949553908587 - type: nauc_ndcg_at_100_diff1 value: 35.072050841400085 - type: nauc_ndcg_at_100_max value: 33.77530570776558 - type: nauc_ndcg_at_100_std value: 6.476581022520731 - type: nauc_ndcg_at_10_diff1 value: 35.17349390611081 - type: nauc_ndcg_at_10_max value: 31.74636270166893 - type: nauc_ndcg_at_10_std value: 3.825524111282119 - type: nauc_ndcg_at_1_diff1 value: 39.43286392756712 - type: nauc_ndcg_at_1_max value: 31.633687884951282 - type: nauc_ndcg_at_1_std value: -0.27764785706458617 - type: nauc_ndcg_at_20_diff1 value: 35.390740077541615 - type: nauc_ndcg_at_20_max value: 33.026639113297236 - type: nauc_ndcg_at_20_std value: 5.161352869053754 - type: nauc_ndcg_at_3_diff1 value: 34.784893735708245 - type: nauc_ndcg_at_3_max value: 32.37270211170083 - type: nauc_ndcg_at_3_std value: 4.002202121330911 - type: nauc_ndcg_at_5_diff1 value: 36.13692738773426 - type: nauc_ndcg_at_5_max value: 32.39302727687912 - type: nauc_ndcg_at_5_std value: 3.6394963350009557 - type: nauc_precision_at_1000_diff1 value: -22.965728978391358 - type: nauc_precision_at_1000_max value: 0.991199838881063 - type: nauc_precision_at_1000_std value: 9.683600354073281 - type: nauc_precision_at_100_diff1 value: -12.560615008152604 - type: nauc_precision_at_100_max value: 14.334232028365351 - type: nauc_precision_at_100_std value: 17.640635157521896 - type: nauc_precision_at_10_diff1 value: 11.206224974161422 - type: nauc_precision_at_10_max value: 27.636112811297668 - type: nauc_precision_at_10_std value: 13.844463698498116 - type: nauc_precision_at_1_diff1 value: 39.43286392756712 - type: nauc_precision_at_1_max value: 31.633687884951282 - type: nauc_precision_at_1_std value: -0.27764785706458617 - type: nauc_precision_at_20_diff1 value: 2.820754738992166 - type: nauc_precision_at_20_max value: 24.548395859228734 - type: nauc_precision_at_20_std value: 18.659599473117076 - type: nauc_precision_at_3_diff1 value: 23.700089513000997 - type: nauc_precision_at_3_max value: 33.926097310453535 - type: nauc_precision_at_3_std value: 11.528262810991514 - type: nauc_precision_at_5_diff1 value: 19.952717493700728 - type: nauc_precision_at_5_max value: 32.73420866701224 - type: nauc_precision_at_5_std value: 12.572768747428661 - type: nauc_recall_at_1000_diff1 value: 11.338759676414469 - type: nauc_recall_at_1000_max value: 60.27964607676189 - type: nauc_recall_at_1000_std value: 65.64600084082495 - type: nauc_recall_at_100_diff1 value: 25.61894545906908 - type: nauc_recall_at_100_max value: 33.053374407768985 - type: nauc_recall_at_100_std value: 25.841122943882226 - type: nauc_recall_at_10_diff1 value: 28.374429812654515 - type: nauc_recall_at_10_max 
value: 25.752628109349747 - type: nauc_recall_at_10_std value: 5.460179271646167 - type: nauc_recall_at_1_diff1 value: 43.561442228636224 - type: nauc_recall_at_1_max value: 30.139564573234708 - type: nauc_recall_at_1_std value: -4.277872763081037 - type: nauc_recall_at_20_diff1 value: 28.096290358405124 - type: nauc_recall_at_20_max value: 29.47090259172014 - type: nauc_recall_at_20_std value: 10.769843783609222 - type: nauc_recall_at_3_diff1 value: 31.442770007498943 - type: nauc_recall_at_3_max value: 30.06591425778911 - type: nauc_recall_at_3_std value: 5.640565369414359 - type: nauc_recall_at_5_diff1 value: 32.95442456012167 - type: nauc_recall_at_5_max value: 28.473408007093287 - type: nauc_recall_at_5_std value: 4.467307517857356 - type: ndcg_at_1 value: 28.458 - type: ndcg_at_10 value: 41.297 - type: ndcg_at_100 value: 47.797 - type: ndcg_at_1000 value: 50.059 - type: ndcg_at_20 value: 43.856 - type: ndcg_at_3 value: 35.656 - type: ndcg_at_5 value: 38.779 - type: precision_at_1 value: 28.458 - type: precision_at_10 value: 8.004 - type: precision_at_100 value: 1.68 - type: precision_at_1000 value: 0.254 - type: precision_at_20 value: 5.1979999999999995 - type: precision_at_3 value: 16.864 - type: precision_at_5 value: 12.727 - type: recall_at_1 value: 24.239 - type: recall_at_10 value: 54.26499999999999 - type: recall_at_100 value: 82.101 - type: recall_at_1000 value: 95.994 - type: recall_at_20 value: 63.649 - type: recall_at_3 value: 39.206 - type: recall_at_5 value: 46.604 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWordpressRetrieval (default) revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 split: test type: mteb/cqadupstack-wordpress metrics: - type: main_score value: 33.832 - type: map_at_1 value: 20.003 - type: map_at_10 value: 28.642 - type: map_at_100 value: 29.839 - type: map_at_1000 value: 29.958000000000002 - type: map_at_20 value: 29.349999999999998 - type: map_at_3 value: 25.535000000000004 - type: map_at_5 value: 27.400000000000002 - type: mrr_at_1 value: 21.811460258780038 - type: mrr_at_10 value: 30.64372267699438 - type: mrr_at_100 value: 31.73852756929626 - type: mrr_at_1000 value: 31.815679800702124 - type: mrr_at_20 value: 31.299814331394977 - type: mrr_at_3 value: 28.034504004929133 - type: mrr_at_5 value: 29.624152803450386 - type: nauc_map_at_1000_diff1 value: 27.06604986940547 - type: nauc_map_at_1000_max value: 30.733957522966943 - type: nauc_map_at_1000_std value: 5.707667228631227 - type: nauc_map_at_100_diff1 value: 27.069243825104223 - type: nauc_map_at_100_max value: 30.697115186434498 - type: nauc_map_at_100_std value: 5.694181242130099 - type: nauc_map_at_10_diff1 value: 27.287579843894623 - type: nauc_map_at_10_max value: 30.416063160658435 - type: nauc_map_at_10_std value: 5.001223879129747 - type: nauc_map_at_1_diff1 value: 34.766915543377216 - type: nauc_map_at_1_max value: 31.68917356389495 - type: nauc_map_at_1_std value: 4.63364234029443 - type: nauc_map_at_20_diff1 value: 26.988320257908132 - type: nauc_map_at_20_max value: 30.59211599414314 - type: nauc_map_at_20_std value: 5.5155380180944205 - type: nauc_map_at_3_diff1 value: 29.76134767596865 - type: nauc_map_at_3_max value: 30.006661638483646 - type: nauc_map_at_3_std value: 5.008441487743006 - type: nauc_map_at_5_diff1 value: 27.996100405808832 - type: nauc_map_at_5_max value: 29.688313272752787 - type: nauc_map_at_5_std value: 4.989016460968753 - type: nauc_mrr_at_1000_diff1 value: 27.026456330287647 - type: nauc_mrr_at_1000_max value: 32.20560453205575 - 
type: nauc_mrr_at_1000_std value: 5.546537805647763 - type: nauc_mrr_at_100_diff1 value: 27.02363656101913 - type: nauc_mrr_at_100_max value: 32.196640350864875 - type: nauc_mrr_at_100_std value: 5.518951009854468 - type: nauc_mrr_at_10_diff1 value: 27.00915074052057 - type: nauc_mrr_at_10_max value: 32.001757674286736 - type: nauc_mrr_at_10_std value: 5.072731140770552 - type: nauc_mrr_at_1_diff1 value: 34.845574431208064 - type: nauc_mrr_at_1_max value: 34.49816690982053 - type: nauc_mrr_at_1_std value: 5.41968335382083 - type: nauc_mrr_at_20_diff1 value: 26.84248184004788 - type: nauc_mrr_at_20_max value: 32.07749657503134 - type: nauc_mrr_at_20_std value: 5.441862772046592 - type: nauc_mrr_at_3_diff1 value: 29.505028408787286 - type: nauc_mrr_at_3_max value: 32.86798316031229 - type: nauc_mrr_at_3_std value: 4.8203112022331895 - type: nauc_mrr_at_5_diff1 value: 27.569200794858347 - type: nauc_mrr_at_5_max value: 31.645968122456676 - type: nauc_mrr_at_5_std value: 4.9761558440226334 - type: nauc_ndcg_at_1000_diff1 value: 23.88608662925394 - type: nauc_ndcg_at_1000_max value: 31.479699528948128 - type: nauc_ndcg_at_1000_std value: 7.525533932973386 - type: nauc_ndcg_at_100_diff1 value: 23.604927980525353 - type: nauc_ndcg_at_100_max value: 31.17191900574252 - type: nauc_ndcg_at_100_std value: 7.378799400265408 - type: nauc_ndcg_at_10_diff1 value: 23.62553795593263 - type: nauc_ndcg_at_10_max value: 30.338342936369855 - type: nauc_ndcg_at_10_std value: 4.953463227590225 - type: nauc_ndcg_at_1_diff1 value: 34.845574431208064 - type: nauc_ndcg_at_1_max value: 34.49816690982053 - type: nauc_ndcg_at_1_std value: 5.41968335382083 - type: nauc_ndcg_at_20_diff1 value: 22.70534696949328 - type: nauc_ndcg_at_20_max value: 30.65105372647873 - type: nauc_ndcg_at_20_std value: 6.53760429235338 - type: nauc_ndcg_at_3_diff1 value: 28.25657951417217 - type: nauc_ndcg_at_3_max value: 30.680157197390457 - type: nauc_ndcg_at_3_std value: 5.055222325853256 - type: nauc_ndcg_at_5_diff1 value: 25.051860390557394 - type: nauc_ndcg_at_5_max value: 29.321858649131254 - type: nauc_ndcg_at_5_std value: 5.084862957981524 - type: nauc_precision_at_1000_diff1 value: -13.576500693637458 - type: nauc_precision_at_1000_max value: 14.89365822136165 - type: nauc_precision_at_1000_std value: 2.561285129665411 - type: nauc_precision_at_100_diff1 value: -0.5356717684025231 - type: nauc_precision_at_100_max value: 24.869247406121847 - type: nauc_precision_at_100_std value: 11.862364821181174 - type: nauc_precision_at_10_diff1 value: 9.30582069803439 - type: nauc_precision_at_10_max value: 30.923589035628858 - type: nauc_precision_at_10_std value: 5.398201031301567 - type: nauc_precision_at_1_diff1 value: 34.845574431208064 - type: nauc_precision_at_1_max value: 34.49816690982053 - type: nauc_precision_at_1_std value: 5.41968335382083 - type: nauc_precision_at_20_diff1 value: 4.059202632041108 - type: nauc_precision_at_20_max value: 30.894783817339388 - type: nauc_precision_at_20_std value: 10.754015672232937 - type: nauc_precision_at_3_diff1 value: 22.522340581277586 - type: nauc_precision_at_3_max value: 32.5100154377477 - type: nauc_precision_at_3_std value: 6.522378511380731 - type: nauc_precision_at_5_diff1 value: 14.85331547455788 - type: nauc_precision_at_5_max value: 30.095138012218207 - type: nauc_precision_at_5_std value: 5.64736160156279 - type: nauc_recall_at_1000_diff1 value: -5.27332415745275 - type: nauc_recall_at_1000_max value: 33.16050370515966 - type: nauc_recall_at_1000_std value: 35.48080153587079 - type: 
nauc_recall_at_100_diff1 value: 9.401206314932313 - type: nauc_recall_at_100_max value: 29.15466334221412 - type: nauc_recall_at_100_std value: 15.925857450796613 - type: nauc_recall_at_10_diff1 value: 13.2795134118332 - type: nauc_recall_at_10_max value: 27.4224535997482 - type: nauc_recall_at_10_std value: 4.610601473431608 - type: nauc_recall_at_1_diff1 value: 34.766915543377216 - type: nauc_recall_at_1_max value: 31.68917356389495 - type: nauc_recall_at_1_std value: 4.63364234029443 - type: nauc_recall_at_20_diff1 value: 9.041251274906765 - type: nauc_recall_at_20_max value: 27.833816433039633 - type: nauc_recall_at_20_std value: 10.004419902367824 - type: nauc_recall_at_3_diff1 value: 24.312698907863194 - type: nauc_recall_at_3_max value: 28.445506445251507 - type: nauc_recall_at_3_std value: 4.954049242639007 - type: nauc_recall_at_5_diff1 value: 17.32213286984511 - type: nauc_recall_at_5_max value: 25.38081324317843 - type: nauc_recall_at_5_std value: 5.5041998917365875 - type: ndcg_at_1 value: 21.811 - type: ndcg_at_10 value: 33.832 - type: ndcg_at_100 value: 39.675 - type: ndcg_at_1000 value: 42.22 - type: ndcg_at_20 value: 36.246 - type: ndcg_at_3 value: 28.09 - type: ndcg_at_5 value: 31.173000000000002 - type: precision_at_1 value: 21.811 - type: precision_at_10 value: 5.601 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_20 value: 3.383 - type: precision_at_3 value: 12.261 - type: precision_at_5 value: 9.168 - type: recall_at_1 value: 20.003 - type: recall_at_10 value: 47.754000000000005 - type: recall_at_100 value: 74.575 - type: recall_at_1000 value: 93.118 - type: recall_at_20 value: 56.887 - type: recall_at_3 value: 32.584999999999994 - type: recall_at_5 value: 39.987 task: type: Retrieval - dataset: config: default name: MTEB ClimateFEVER (default) revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 split: test type: mteb/climate-fever metrics: - type: main_score value: 17.039 - type: map_at_1 value: 5.803 - type: map_at_10 value: 11.043 - type: map_at_100 value: 12.575 - type: map_at_1000 value: 12.777 - type: map_at_20 value: 11.811 - type: map_at_3 value: 8.886 - type: map_at_5 value: 9.918000000000001 - type: mrr_at_1 value: 13.745928338762214 - type: mrr_at_10 value: 23.035856470709884 - type: mrr_at_100 value: 24.31853659598728 - type: mrr_at_1000 value: 24.381741087426335 - type: mrr_at_20 value: 23.826099691586087 - type: mrr_at_3 value: 19.641693811074926 - type: mrr_at_5 value: 21.462540716612363 - type: nauc_map_at_1000_diff1 value: 14.075861001605652 - type: nauc_map_at_1000_max value: 32.50540299753915 - type: nauc_map_at_1000_std value: 16.63440932146544 - type: nauc_map_at_100_diff1 value: 14.123217880033861 - type: nauc_map_at_100_max value: 32.41146114250095 - type: nauc_map_at_100_std value: 16.381092437163552 - type: nauc_map_at_10_diff1 value: 14.311158236352378 - type: nauc_map_at_10_max value: 31.663465768164695 - type: nauc_map_at_10_std value: 14.79004698936343 - type: nauc_map_at_1_diff1 value: 19.79138559884689 - type: nauc_map_at_1_max value: 28.547089859427256 - type: nauc_map_at_1_std value: 8.354323364048897 - type: nauc_map_at_20_diff1 value: 14.23157436194119 - type: nauc_map_at_20_max value: 31.829323482590056 - type: nauc_map_at_20_std value: 15.728487414739051 - type: nauc_map_at_3_diff1 value: 16.977223379413818 - type: nauc_map_at_3_max value: 30.91365779223932 - type: nauc_map_at_3_std value: 11.004013141499629 - type: nauc_map_at_5_diff1 value: 15.287594960641249 - type: 
nauc_map_at_5_max value: 31.4855930880175 - type: nauc_map_at_5_std value: 13.035283381488696 - type: nauc_mrr_at_1000_diff1 value: 15.19673911397727 - type: nauc_mrr_at_1000_max value: 32.27281360470439 - type: nauc_mrr_at_1000_std value: 17.982389020371247 - type: nauc_mrr_at_100_diff1 value: 15.20130608016213 - type: nauc_mrr_at_100_max value: 32.26452236374923 - type: nauc_mrr_at_100_std value: 18.002716153000627 - type: nauc_mrr_at_10_diff1 value: 15.1395436352143 - type: nauc_mrr_at_10_max value: 32.2082338876027 - type: nauc_mrr_at_10_std value: 17.838618343534538 - type: nauc_mrr_at_1_diff1 value: 18.75801367430074 - type: nauc_mrr_at_1_max value: 29.522146316819242 - type: nauc_mrr_at_1_std value: 12.322203243653137 - type: nauc_mrr_at_20_diff1 value: 15.15341535844078 - type: nauc_mrr_at_20_max value: 32.149592234561844 - type: nauc_mrr_at_20_std value: 18.05739998762671 - type: nauc_mrr_at_3_diff1 value: 16.713713609700807 - type: nauc_mrr_at_3_max value: 31.073485728698007 - type: nauc_mrr_at_3_std value: 14.685962809871809 - type: nauc_mrr_at_5_diff1 value: 15.20715918092642 - type: nauc_mrr_at_5_max value: 31.91496536532501 - type: nauc_mrr_at_5_std value: 16.32734830238726 - type: nauc_ndcg_at_1000_diff1 value: 11.479850187724962 - type: nauc_ndcg_at_1000_max value: 36.373357396933336 - type: nauc_ndcg_at_1000_std value: 25.683568919701514 - type: nauc_ndcg_at_100_diff1 value: 11.559236662789004 - type: nauc_ndcg_at_100_max value: 34.90591391147814 - type: nauc_ndcg_at_100_std value: 23.057965705153865 - type: nauc_ndcg_at_10_diff1 value: 12.544873936559739 - type: nauc_ndcg_at_10_max value: 32.83663980495747 - type: nauc_ndcg_at_10_std value: 19.238337421544006 - type: nauc_ndcg_at_1_diff1 value: 18.75801367430074 - type: nauc_ndcg_at_1_max value: 29.522146316819242 - type: nauc_ndcg_at_1_std value: 12.322203243653137 - type: nauc_ndcg_at_20_diff1 value: 12.361691440408999 - type: nauc_ndcg_at_20_max value: 32.76528016217301 - type: nauc_ndcg_at_20_std value: 21.05644041370163 - type: nauc_ndcg_at_3_diff1 value: 15.96832626064785 - type: nauc_ndcg_at_3_max value: 31.44063980122053 - type: nauc_ndcg_at_3_std value: 12.936413672431879 - type: nauc_ndcg_at_5_diff1 value: 13.713513523433363 - type: nauc_ndcg_at_5_max value: 32.36446493796571 - type: nauc_ndcg_at_5_std value: 15.76174671030862 - type: nauc_precision_at_1000_diff1 value: 1.34184180265124 - type: nauc_precision_at_1000_max value: 23.682869852931777 - type: nauc_precision_at_1000_std value: 30.11200034478147 - type: nauc_precision_at_100_diff1 value: 2.7429761602552545 - type: nauc_precision_at_100_max value: 27.799306591674704 - type: nauc_precision_at_100_std value: 28.63347129016136 - type: nauc_precision_at_10_diff1 value: 6.809778511951629 - type: nauc_precision_at_10_max value: 31.549319057696106 - type: nauc_precision_at_10_std value: 27.797058851811858 - type: nauc_precision_at_1_diff1 value: 18.75801367430074 - type: nauc_precision_at_1_max value: 29.522146316819242 - type: nauc_precision_at_1_std value: 12.322203243653137 - type: nauc_precision_at_20_diff1 value: 6.652188021122349 - type: nauc_precision_at_20_max value: 28.77994242287882 - type: nauc_precision_at_20_std value: 29.36148470486359 - type: nauc_precision_at_3_diff1 value: 14.712258200325262 - type: nauc_precision_at_3_max value: 32.0031289884313 - type: nauc_precision_at_3_std value: 16.234492809427824 - type: nauc_precision_at_5_diff1 value: 9.304224063707855 - type: nauc_precision_at_5_max value: 33.271254238986785 - type: 
nauc_precision_at_5_std value: 21.406431386266583 - type: nauc_recall_at_1000_diff1 value: 0.25017014524490533 - type: nauc_recall_at_1000_max value: 38.04838759534934 - type: nauc_recall_at_1000_std value: 40.585116054127084 - type: nauc_recall_at_100_diff1 value: 3.0042658139447593 - type: nauc_recall_at_100_max value: 30.765498421078746 - type: nauc_recall_at_100_std value: 26.344845670411445 - type: nauc_recall_at_10_diff1 value: 7.066884715859835 - type: nauc_recall_at_10_max value: 30.432215207137187 - type: nauc_recall_at_10_std value: 21.88096446786714 - type: nauc_recall_at_1_diff1 value: 19.79138559884689 - type: nauc_recall_at_1_max value: 28.547089859427256 - type: nauc_recall_at_1_std value: 8.354323364048897 - type: nauc_recall_at_20_diff1 value: 6.260902435695847 - type: nauc_recall_at_20_max value: 28.085979713597105 - type: nauc_recall_at_20_std value: 24.042661661305686 - type: nauc_recall_at_3_diff1 value: 13.85539993127579 - type: nauc_recall_at_3_max value: 30.66048968710386 - type: nauc_recall_at_3_std value: 12.49456367200617 - type: nauc_recall_at_5_diff1 value: 9.732734054264856 - type: nauc_recall_at_5_max value: 31.558450901307765 - type: nauc_recall_at_5_std value: 16.85189341873968 - type: ndcg_at_1 value: 13.746 - type: ndcg_at_10 value: 17.039 - type: ndcg_at_100 value: 24.251 - type: ndcg_at_1000 value: 28.407 - type: ndcg_at_20 value: 19.596 - type: ndcg_at_3 value: 12.825000000000001 - type: ndcg_at_5 value: 14.246 - type: precision_at_1 value: 13.746 - type: precision_at_10 value: 5.772 - type: precision_at_100 value: 1.345 - type: precision_at_1000 value: 0.211 - type: precision_at_20 value: 3.945 - type: precision_at_3 value: 9.815 - type: precision_at_5 value: 7.9479999999999995 - type: recall_at_1 value: 5.803 - type: recall_at_10 value: 22.320999999999998 - type: recall_at_100 value: 48.083 - type: recall_at_1000 value: 72.02199999999999 - type: recall_at_20 value: 29.693 - type: recall_at_3 value: 12.249 - type: recall_at_5 value: 15.925 task: type: Retrieval - dataset: config: default name: MTEB DBPedia (default) revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 split: dev type: mteb/dbpedia metrics: - type: main_score value: 32.056000000000004 - type: map_at_1 value: 6.159 - type: map_at_10 value: 14.749 - type: map_at_100 value: 19.639 - type: map_at_1000 value: 20.902 - type: map_at_20 value: 16.320999999999998 - type: map_at_3 value: 10.598 - type: map_at_5 value: 12.232 - type: mrr_at_1 value: 55.223880597014926 - type: mrr_at_10 value: 62.61194029850745 - type: mrr_at_100 value: 63.161403712876506 - type: mrr_at_1000 value: 63.19045792845882 - type: mrr_at_20 value: 62.883310719131615 - type: mrr_at_3 value: 60.69651741293532 - type: mrr_at_5 value: 62.11442786069652 - type: nauc_map_at_1000_diff1 value: 14.66774420808803 - type: nauc_map_at_1000_max value: 15.83673037882429 - type: nauc_map_at_1000_std value: 5.780706425119277 - type: nauc_map_at_100_diff1 value: 15.155295918719005 - type: nauc_map_at_100_max value: 14.072328538517597 - type: nauc_map_at_100_std value: 1.770766267850918 - type: nauc_map_at_10_diff1 value: 14.98606047237713 - type: nauc_map_at_10_max value: 6.187697786036102 - type: nauc_map_at_10_std value: -6.360564380948436 - type: nauc_map_at_1_diff1 value: 28.930347534146826 - type: nauc_map_at_1_max value: -22.946473489514478 - type: nauc_map_at_1_std value: -5.478885357489002 - type: nauc_map_at_20_diff1 value: 13.225096470295508 - type: nauc_map_at_20_max value: 8.134020886577796 - type: nauc_map_at_20_std value: 
-3.688087435394214 - type: nauc_map_at_3_diff1 value: 23.967211717665325 - type: nauc_map_at_3_max value: -8.54757081540758 - type: nauc_map_at_3_std value: -11.854904684666472 - type: nauc_map_at_5_diff1 value: 18.69169934915187 - type: nauc_map_at_5_max value: -2.681982078444763 - type: nauc_map_at_5_std value: -9.677458332091078 - type: nauc_mrr_at_1000_diff1 value: 29.1326144948668 - type: nauc_mrr_at_1000_max value: 30.32888792364254 - type: nauc_mrr_at_1000_std value: 0.10599054230879215 - type: nauc_mrr_at_100_diff1 value: 29.170245504786546 - type: nauc_mrr_at_100_max value: 30.334485258327394 - type: nauc_mrr_at_100_std value: 0.073017997841517 - type: nauc_mrr_at_10_diff1 value: 28.61680486143009 - type: nauc_mrr_at_10_max value: 29.886786779812883 - type: nauc_mrr_at_10_std value: -0.7804123870159443 - type: nauc_mrr_at_1_diff1 value: 28.445954358025144 - type: nauc_mrr_at_1_max value: 29.757901190323032 - type: nauc_mrr_at_1_std value: -8.160393754836772 - type: nauc_mrr_at_20_diff1 value: 28.970583921994 - type: nauc_mrr_at_20_max value: 30.297978189519036 - type: nauc_mrr_at_20_std value: 0.025099380605112777 - type: nauc_mrr_at_3_diff1 value: 26.922893811846027 - type: nauc_mrr_at_3_max value: 28.454141443011615 - type: nauc_mrr_at_3_std value: 0.5404188302638873 - type: nauc_mrr_at_5_diff1 value: 28.260050983821404 - type: nauc_mrr_at_5_max value: 30.662788723961064 - type: nauc_mrr_at_5_std value: 0.25695689627892465 - type: nauc_ndcg_at_1000_diff1 value: 17.550822161705184 - type: nauc_ndcg_at_1000_max value: 23.47112652887515 - type: nauc_ndcg_at_1000_std value: 18.921600629635876 - type: nauc_ndcg_at_100_diff1 value: 19.75836373321895 - type: nauc_ndcg_at_100_max value: 17.218863090775898 - type: nauc_ndcg_at_100_std value: 6.188418551354186 - type: nauc_ndcg_at_10_diff1 value: 17.018486303230613 - type: nauc_ndcg_at_10_max value: 13.172847759812633 - type: nauc_ndcg_at_10_std value: -2.557915936016456 - type: nauc_ndcg_at_1_diff1 value: 28.30450276152453 - type: nauc_ndcg_at_1_max value: 10.705906316378517 - type: nauc_ndcg_at_1_std value: -11.557649340196585 - type: nauc_ndcg_at_20_diff1 value: 17.248069262429212 - type: nauc_ndcg_at_20_max value: 9.849529486503714 - type: nauc_ndcg_at_20_std value: -0.311186473192 - type: nauc_ndcg_at_3_diff1 value: 24.91723384151359 - type: nauc_ndcg_at_3_max value: 12.746501741669613 - type: nauc_ndcg_at_3_std value: -7.478244755823778 - type: nauc_ndcg_at_5_diff1 value: 20.109109265849096 - type: nauc_ndcg_at_5_max value: 15.035625561879778 - type: nauc_ndcg_at_5_std value: -4.249792039203992 - type: nauc_precision_at_1000_diff1 value: -1.6904232746143093 - type: nauc_precision_at_1000_max value: 28.193351306594632 - type: nauc_precision_at_1000_std value: 49.466555012944426 - type: nauc_precision_at_100_diff1 value: 5.522098718336858 - type: nauc_precision_at_100_max value: 32.28843828110046 - type: nauc_precision_at_100_std value: 38.855283968708925 - type: nauc_precision_at_10_diff1 value: 0.8754832613076982 - type: nauc_precision_at_10_max value: 33.667753747656214 - type: nauc_precision_at_10_std value: 11.811992543860297 - type: nauc_precision_at_1_diff1 value: 28.445954358025144 - type: nauc_precision_at_1_max value: 29.757901190323032 - type: nauc_precision_at_1_std value: -8.160393754836772 - type: nauc_precision_at_20_diff1 value: 3.06814758548795 - type: nauc_precision_at_20_max value: 34.56674755156743 - type: nauc_precision_at_20_std value: 20.6096728873244 - type: nauc_precision_at_3_diff1 value: 14.003111174047799 - 
type: nauc_precision_at_3_max value: 26.627875058653572 - type: nauc_precision_at_3_std value: -1.576904767201546 - type: nauc_precision_at_5_diff1 value: 4.2565090179589 - type: nauc_precision_at_5_max value: 30.09262969886471 - type: nauc_precision_at_5_std value: 3.37613085747931 - type: nauc_recall_at_1000_diff1 value: 1.318125788778631 - type: nauc_recall_at_1000_max value: 23.347027051136532 - type: nauc_recall_at_1000_std value: 38.6734430950855 - type: nauc_recall_at_100_diff1 value: 16.260906677583385 - type: nauc_recall_at_100_max value: 13.269724794813417 - type: nauc_recall_at_100_std value: 12.528814958715534 - type: nauc_recall_at_10_diff1 value: 12.703144510602954 - type: nauc_recall_at_10_max value: 11.643752381855938 - type: nauc_recall_at_10_std value: -4.93246361539532 - type: nauc_recall_at_1_diff1 value: 28.930347534146826 - type: nauc_recall_at_1_max value: -22.946473489514478 - type: nauc_recall_at_1_std value: -5.478885357489002 - type: nauc_recall_at_20_diff1 value: 8.197159305821131 - type: nauc_recall_at_20_max value: 7.992590649067159 - type: nauc_recall_at_20_std value: 2.9499995677138213 - type: nauc_recall_at_3_diff1 value: 24.578598477208853 - type: nauc_recall_at_3_max value: -5.236092678096414 - type: nauc_recall_at_3_std value: -11.675071458967711 - type: nauc_recall_at_5_diff1 value: 16.448098064378637 - type: nauc_recall_at_5_max value: 0.5330990419735525 - type: nauc_recall_at_5_std value: -8.139497416808533 - type: ndcg_at_1 value: 44.03 - type: ndcg_at_10 value: 32.056000000000004 - type: ndcg_at_100 value: 36.813 - type: ndcg_at_1000 value: 44.603 - type: ndcg_at_20 value: 30.522 - type: ndcg_at_3 value: 37.112 - type: ndcg_at_5 value: 34.353 - type: precision_at_1 value: 55.224 - type: precision_at_10 value: 24.776 - type: precision_at_100 value: 7.5520000000000005 - type: precision_at_1000 value: 1.361 - type: precision_at_20 value: 16.418 - type: precision_at_3 value: 39.303 - type: precision_at_5 value: 32.537 - type: recall_at_1 value: 6.159 - type: recall_at_10 value: 18.767 - type: recall_at_100 value: 42.281 - type: recall_at_1000 value: 69.05799999999999 - type: recall_at_20 value: 24.176000000000002 - type: recall_at_3 value: 11.350999999999999 - type: recall_at_5 value: 13.858 task: type: Retrieval - dataset: config: default name: MTEB DBPedia (default) revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 split: test type: mteb/dbpedia metrics: - type: main_score value: 26.912000000000003 - type: map_at_1 value: 5.587000000000001 - type: map_at_10 value: 11.895 - type: map_at_100 value: 16.598 - type: map_at_1000 value: 17.812 - type: map_at_20 value: 13.679 - type: map_at_3 value: 8.475000000000001 - type: map_at_5 value: 9.864 - type: mrr_at_1 value: 43.5 - type: mrr_at_10 value: 55.057341269841274 - type: mrr_at_100 value: 55.7116763249525 - type: mrr_at_1000 value: 55.73114654313282 - type: mrr_at_20 value: 55.48448584654082 - type: mrr_at_3 value: 52.625000000000014 - type: mrr_at_5 value: 54.25 - type: nauc_map_at_1000_diff1 value: 9.222585526729162 - type: nauc_map_at_1000_max value: 6.474554295414542 - type: nauc_map_at_1000_std value: 19.530457057036873 - type: nauc_map_at_100_diff1 value: 9.11589434330461 - type: nauc_map_at_100_max value: 3.0466163667963415 - type: nauc_map_at_100_std value: 15.897958511594085 - type: nauc_map_at_10_diff1 value: 14.033788134812697 - type: nauc_map_at_10_max value: -4.180301152674868 - type: nauc_map_at_10_std value: 0.9565463929832962 - type: nauc_map_at_1_diff1 value: 24.88372785558863 - type: 
nauc_map_at_1_max value: -7.903205339084808 - type: nauc_map_at_1_std value: -9.025106405036246 - type: nauc_map_at_20_diff1 value: 10.982775028585708 - type: nauc_map_at_20_max value: -1.9814898263360956 - type: nauc_map_at_20_std value: 5.022744754836501 - type: nauc_map_at_3_diff1 value: 18.920355286487474 - type: nauc_map_at_3_max value: -7.008953507731825 - type: nauc_map_at_3_std value: -7.4302507983255035 - type: nauc_map_at_5_diff1 value: 15.649741124593813 - type: nauc_map_at_5_max value: -6.470264312561451 - type: nauc_map_at_5_std value: -3.629324410314669 - type: nauc_mrr_at_1000_diff1 value: 27.445673721571445 - type: nauc_mrr_at_1000_max value: 30.848234089089377 - type: nauc_mrr_at_1000_std value: 25.584068294106398 - type: nauc_mrr_at_100_diff1 value: 27.450436881700647 - type: nauc_mrr_at_100_max value: 30.86805059900003 - type: nauc_mrr_at_100_std value: 25.602642209980893 - type: nauc_mrr_at_10_diff1 value: 27.03642213567309 - type: nauc_mrr_at_10_max value: 30.564766924576997 - type: nauc_mrr_at_10_std value: 25.498766017362584 - type: nauc_mrr_at_1_diff1 value: 29.75429751366281 - type: nauc_mrr_at_1_max value: 27.603423680456483 - type: nauc_mrr_at_1_std value: 21.42210410437796 - type: nauc_mrr_at_20_diff1 value: 27.316158731916072 - type: nauc_mrr_at_20_max value: 30.788911987885836 - type: nauc_mrr_at_20_std value: 25.511317895342973 - type: nauc_mrr_at_3_diff1 value: 27.40278716897145 - type: nauc_mrr_at_3_max value: 30.17795110537535 - type: nauc_mrr_at_3_std value: 25.58949724207859 - type: nauc_mrr_at_5_diff1 value: 27.02036667526836 - type: nauc_mrr_at_5_max value: 30.56172819314359 - type: nauc_mrr_at_5_std value: 25.68106181481602 - type: nauc_ndcg_at_1000_diff1 value: 10.185469519827409 - type: nauc_ndcg_at_1000_max value: 19.97222766004627 - type: nauc_ndcg_at_1000_std value: 33.627761835495576 - type: nauc_ndcg_at_100_diff1 value: 10.34566025576966 - type: nauc_ndcg_at_100_max value: 10.904699675777103 - type: nauc_ndcg_at_100_std value: 25.902939752688408 - type: nauc_ndcg_at_10_diff1 value: 9.763184508901789 - type: nauc_ndcg_at_10_max value: 15.84272775012148 - type: nauc_ndcg_at_10_std value: 23.3644718110982 - type: nauc_ndcg_at_1_diff1 value: 27.53213753471065 - type: nauc_ndcg_at_1_max value: 25.47433464492357 - type: nauc_ndcg_at_1_std value: 22.927455121813495 - type: nauc_ndcg_at_20_diff1 value: 9.21982112040321 - type: nauc_ndcg_at_20_max value: 10.855015325947646 - type: nauc_ndcg_at_20_std value: 20.177684268239858 - type: nauc_ndcg_at_3_diff1 value: 16.480487570544028 - type: nauc_ndcg_at_3_max value: 23.08069150528697 - type: nauc_ndcg_at_3_std value: 24.654582040091068 - type: nauc_ndcg_at_5_diff1 value: 11.091173429430855 - type: nauc_ndcg_at_5_max value: 18.943964454340282 - type: nauc_ndcg_at_5_std value: 25.417120083736062 - type: nauc_precision_at_1000_diff1 value: 6.551654038336142 - type: nauc_precision_at_1000_max value: 49.44788048053744 - type: nauc_precision_at_1000_std value: 37.149790102492084 - type: nauc_precision_at_100_diff1 value: -3.0445732987998726 - type: nauc_precision_at_100_max value: 35.41813437751633 - type: nauc_precision_at_100_std value: 49.84817605241621 - type: nauc_precision_at_10_diff1 value: -1.9547816065045103 - type: nauc_precision_at_10_max value: 28.351986746022433 - type: nauc_precision_at_10_std value: 39.87641283361654 - type: nauc_precision_at_1_diff1 value: 29.75429751366281 - type: nauc_precision_at_1_max value: 27.603423680456483 - type: nauc_precision_at_1_std value: 21.42210410437796 - type: 
nauc_precision_at_20_diff1 value: -5.163004702977454 - type: nauc_precision_at_20_max value: 29.15244120857218 - type: nauc_precision_at_20_std value: 42.54647731285851 - type: nauc_precision_at_3_diff1 value: 8.549385971521142 - type: nauc_precision_at_3_max value: 28.050026369321174 - type: nauc_precision_at_3_std value: 31.253522187900256 - type: nauc_precision_at_5_diff1 value: -0.5560619777844419 - type: nauc_precision_at_5_max value: 25.400998256732244 - type: nauc_precision_at_5_std value: 36.987247493209956 - type: nauc_recall_at_1000_diff1 value: 6.3187672735979135 - type: nauc_recall_at_1000_max value: 11.452062079279093 - type: nauc_recall_at_1000_std value: 34.01011092060735 - type: nauc_recall_at_100_diff1 value: 10.256765153257081 - type: nauc_recall_at_100_max value: 1.3948375294116233 - type: nauc_recall_at_100_std value: 23.12466567545472 - type: nauc_recall_at_10_diff1 value: 9.429415669158093 - type: nauc_recall_at_10_max value: -7.19845910730243 - type: nauc_recall_at_10_std value: -0.7518308015760871 - type: nauc_recall_at_1_diff1 value: 24.88372785558863 - type: nauc_recall_at_1_max value: -7.903205339084808 - type: nauc_recall_at_1_std value: -9.025106405036246 - type: nauc_recall_at_20_diff1 value: 7.221922012299395 - type: nauc_recall_at_20_max value: -5.373839522647143 - type: nauc_recall_at_20_std value: 3.361400375981407 - type: nauc_recall_at_3_diff1 value: 15.217973070806593 - type: nauc_recall_at_3_max value: -8.255546494136766 - type: nauc_recall_at_3_std value: -8.0508215378933 - type: nauc_recall_at_5_diff1 value: 8.750499393201636 - type: nauc_recall_at_5_max value: -9.456361840780644 - type: nauc_recall_at_5_std value: -3.255892902113352 - type: ndcg_at_1 value: 32.25 - type: ndcg_at_10 value: 26.912000000000003 - type: ndcg_at_100 value: 31.477 - type: ndcg_at_1000 value: 38.78 - type: ndcg_at_20 value: 26.851999999999997 - type: ndcg_at_3 value: 29.343000000000004 - type: ndcg_at_5 value: 27.916 - type: precision_at_1 value: 43.5 - type: precision_at_10 value: 22.25 - type: precision_at_100 value: 7.3580000000000005 - type: precision_at_1000 value: 1.6549999999999998 - type: precision_at_20 value: 16.712 - type: precision_at_3 value: 33.667 - type: precision_at_5 value: 28.65 - type: recall_at_1 value: 5.587000000000001 - type: recall_at_10 value: 17.302999999999997 - type: recall_at_100 value: 39.731 - type: recall_at_1000 value: 63.751999999999995 - type: recall_at_20 value: 22.978 - type: recall_at_3 value: 9.884 - type: recall_at_5 value: 12.717999999999998 task: type: Retrieval - dataset: config: default name: MTEB FEVER (default) revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 split: dev type: mteb/fever metrics: - type: main_score value: 42.311 - type: map_at_1 value: 26.230999999999998 - type: map_at_10 value: 36.488 - type: map_at_100 value: 37.189 - type: map_at_1000 value: 37.238 - type: map_at_20 value: 36.908 - type: map_at_3 value: 33.711999999999996 - type: map_at_5 value: 35.363 - type: mrr_at_1 value: 28.5028502850285 - type: mrr_at_10 value: 39.190341653212904 - type: mrr_at_100 value: 39.873337606890374 - type: mrr_at_1000 value: 39.915366095710894 - type: mrr_at_20 value: 39.61445401700697 - type: mrr_at_3 value: 36.438643864386435 - type: mrr_at_5 value: 38.06705670567057 - type: nauc_map_at_1000_diff1 value: 38.544927343322755 - type: nauc_map_at_1000_max value: 31.238008214807472 - type: nauc_map_at_1000_std value: 17.340710731800087 - type: nauc_map_at_100_diff1 value: 38.53271731505664 - type: nauc_map_at_100_max value: 
31.24444002509582 - type: nauc_map_at_100_std value: 17.336497143108794 - type: nauc_map_at_10_diff1 value: 38.47149376137689 - type: nauc_map_at_10_max value: 31.057943306774234 - type: nauc_map_at_10_std value: 16.88320995134788 - type: nauc_map_at_1_diff1 value: 42.64117606936473 - type: nauc_map_at_1_max value: 25.235300368623836 - type: nauc_map_at_1_std value: 11.099092252838478 - type: nauc_map_at_20_diff1 value: 38.53909982615329 - type: nauc_map_at_20_max value: 31.25656920271077 - type: nauc_map_at_20_std value: 17.27576321281346 - type: nauc_map_at_3_diff1 value: 38.70482758910707 - type: nauc_map_at_3_max value: 29.343329165290143 - type: nauc_map_at_3_std value: 14.573356497682216 - type: nauc_map_at_5_diff1 value: 38.45317210617443 - type: nauc_map_at_5_max value: 30.410656735518977 - type: nauc_map_at_5_std value: 16.04759185540847 - type: nauc_mrr_at_1000_diff1 value: 39.53064995242943 - type: nauc_mrr_at_1000_max value: 32.077343336360904 - type: nauc_mrr_at_1000_std value: 17.039796809778124 - type: nauc_mrr_at_100_diff1 value: 39.51767279358483 - type: nauc_mrr_at_100_max value: 32.09214820002331 - type: nauc_mrr_at_100_std value: 17.04820908126493 - type: nauc_mrr_at_10_diff1 value: 39.473737530652535 - type: nauc_mrr_at_10_max value: 31.95485973713632 - type: nauc_mrr_at_10_std value: 16.682344196814157 - type: nauc_mrr_at_1_diff1 value: 43.54209193151506 - type: nauc_mrr_at_1_max value: 26.225221030688388 - type: nauc_mrr_at_1_std value: 11.045465765362609 - type: nauc_mrr_at_20_diff1 value: 39.51856222703832 - type: nauc_mrr_at_20_max value: 32.15641740823006 - type: nauc_mrr_at_20_std value: 17.052323554211206 - type: nauc_mrr_at_3_diff1 value: 39.59110286581465 - type: nauc_mrr_at_3_max value: 30.27336556208048 - type: nauc_mrr_at_3_std value: 14.400464395275131 - type: nauc_mrr_at_5_diff1 value: 39.37475030822441 - type: nauc_mrr_at_5_max value: 31.280543943287686 - type: nauc_mrr_at_5_std value: 15.893234066653813 - type: nauc_ndcg_at_1000_diff1 value: 37.92867727075456 - type: nauc_ndcg_at_1000_max value: 34.646736289585874 - type: nauc_ndcg_at_1000_std value: 22.109283863938227 - type: nauc_ndcg_at_100_diff1 value: 37.598490502476295 - type: nauc_ndcg_at_100_max value: 34.99139335810959 - type: nauc_ndcg_at_100_std value: 22.209322757781297 - type: nauc_ndcg_at_10_diff1 value: 37.46143110942927 - type: nauc_ndcg_at_10_max value: 34.384238800709674 - type: nauc_ndcg_at_10_std value: 20.383762100255804 - type: nauc_ndcg_at_1_diff1 value: 43.54209193151506 - type: nauc_ndcg_at_1_max value: 26.225221030688388 - type: nauc_ndcg_at_1_std value: 11.045465765362609 - type: nauc_ndcg_at_20_diff1 value: 37.629804811157065 - type: nauc_ndcg_at_20_max value: 35.13056902686399 - type: nauc_ndcg_at_20_std value: 21.8348192764161 - type: nauc_ndcg_at_3_diff1 value: 37.80901242867197 - type: nauc_ndcg_at_3_max value: 30.93143006645779 - type: nauc_ndcg_at_3_std value: 15.69317657679832 - type: nauc_ndcg_at_5_diff1 value: 37.36938303664242 - type: nauc_ndcg_at_5_max value: 32.78308787409404 - type: nauc_ndcg_at_5_std value: 18.37346652477358 - type: nauc_precision_at_1000_diff1 value: 16.527943940474014 - type: nauc_precision_at_1000_max value: 36.661745199874645 - type: nauc_precision_at_1000_std value: 40.759235242152194 - type: nauc_precision_at_100_diff1 value: 24.54407163693533 - type: nauc_precision_at_100_max value: 45.60883532038694 - type: nauc_precision_at_100_std value: 42.91332524207595 - type: nauc_precision_at_10_diff1 value: 31.72535525745312 - type: 
nauc_precision_at_10_max value: 45.349381448046586 - type: nauc_precision_at_10_std value: 32.974173826381055 - type: nauc_precision_at_1_diff1 value: 43.54209193151506 - type: nauc_precision_at_1_max value: 26.225221030688388 - type: nauc_precision_at_1_std value: 11.045465765362609 - type: nauc_precision_at_20_diff1 value: 30.374414778965203 - type: nauc_precision_at_20_max value: 48.38235848820685 - type: nauc_precision_at_20_std value: 39.73784986913937 - type: nauc_precision_at_3_diff1 value: 34.95600573048938 - type: nauc_precision_at_3_max value: 35.80713704633644 - type: nauc_precision_at_3_std value: 19.412990960503265 - type: nauc_precision_at_5_diff1 value: 33.12845327196843 - type: nauc_precision_at_5_max value: 40.27118733247272 - type: nauc_precision_at_5_std value: 25.943558127871956 - type: nauc_recall_at_1000_diff1 value: 29.768973516222186 - type: nauc_recall_at_1000_max value: 45.76807061498038 - type: nauc_recall_at_1000_std value: 51.113086504789116 - type: nauc_recall_at_100_diff1 value: 29.22558023832464 - type: nauc_recall_at_100_max value: 46.15003117187778 - type: nauc_recall_at_100_std value: 42.80642812167553 - type: nauc_recall_at_10_diff1 value: 31.629369166217437 - type: nauc_recall_at_10_max value: 42.076418890799836 - type: nauc_recall_at_10_std value: 30.59336185155107 - type: nauc_recall_at_1_diff1 value: 42.64117606936473 - type: nauc_recall_at_1_max value: 25.235300368623836 - type: nauc_recall_at_1_std value: 11.099092252838478 - type: nauc_recall_at_20_diff1 value: 31.335469263874455 - type: nauc_recall_at_20_max value: 45.181873226895455 - type: nauc_recall_at_20_std value: 37.03396315996297 - type: nauc_recall_at_3_diff1 value: 33.30861190274494 - type: nauc_recall_at_3_max value: 33.17199752470918 - type: nauc_recall_at_3_std value: 18.300476180248737 - type: nauc_recall_at_5_diff1 value: 31.99541739955995 - type: nauc_recall_at_5_max value: 37.327841772897685 - type: nauc_recall_at_5_std value: 24.427296500445365 - type: ndcg_at_1 value: 28.503 - type: ndcg_at_10 value: 42.311 - type: ndcg_at_100 value: 45.739999999999995 - type: ndcg_at_1000 value: 47.116 - type: ndcg_at_20 value: 43.814 - type: ndcg_at_3 value: 36.844 - type: ndcg_at_5 value: 39.696999999999996 - type: precision_at_1 value: 28.503 - type: precision_at_10 value: 6.367000000000001 - type: precision_at_100 value: 0.822 - type: precision_at_1000 value: 0.096 - type: precision_at_20 value: 3.5159999999999996 - type: precision_at_3 value: 15.697 - type: precision_at_5 value: 10.975 - type: recall_at_1 value: 26.230999999999998 - type: recall_at_10 value: 57.830000000000005 - type: recall_at_100 value: 73.529 - type: recall_at_1000 value: 84.21 - type: recall_at_20 value: 63.580000000000005 - type: recall_at_3 value: 43.068 - type: recall_at_5 value: 49.99 task: type: Retrieval - dataset: config: default name: MTEB FEVER (default) revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 split: test type: mteb/fever metrics: - type: main_score value: 42.563 - type: map_at_1 value: 25.789 - type: map_at_10 value: 36.523 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.393 - type: map_at_20 value: 37.027 - type: map_at_3 value: 33.483000000000004 - type: map_at_5 value: 35.412 - type: mrr_at_1 value: 27.752775277527753 - type: mrr_at_10 value: 38.92034441539394 - type: mrr_at_100 value: 39.701308768272966 - type: mrr_at_1000 value: 39.74002237607331 - type: mrr_at_20 value: 39.40306190517976 - type: mrr_at_3 value: 35.848584858485836 - type: mrr_at_5 value: 37.803280328032784 - 
type: nauc_map_at_1000_diff1 value: 34.17597190862309 - type: nauc_map_at_1000_max value: 26.9900559130689 - type: nauc_map_at_1000_std value: 12.695854413524419 - type: nauc_map_at_100_diff1 value: 34.159140742195376 - type: nauc_map_at_100_max value: 26.991827854535916 - type: nauc_map_at_100_std value: 12.700812019288938 - type: nauc_map_at_10_diff1 value: 34.08880322400212 - type: nauc_map_at_10_max value: 26.686385238773536 - type: nauc_map_at_10_std value: 12.106693100422692 - type: nauc_map_at_1_diff1 value: 39.75327570681995 - type: nauc_map_at_1_max value: 22.516865174086742 - type: nauc_map_at_1_std value: 7.352429770604897 - type: nauc_map_at_20_diff1 value: 34.09956129529087 - type: nauc_map_at_20_max value: 26.88383408046557 - type: nauc_map_at_20_std value: 12.522543335179536 - type: nauc_map_at_3_diff1 value: 34.4408475586939 - type: nauc_map_at_3_max value: 25.32901294015423 - type: nauc_map_at_3_std value: 10.261012213574276 - type: nauc_map_at_5_diff1 value: 33.839452205385015 - type: nauc_map_at_5_max value: 26.162139149396758 - type: nauc_map_at_5_std value: 11.319738955299071 - type: nauc_mrr_at_1000_diff1 value: 35.35911412311634 - type: nauc_mrr_at_1000_max value: 27.663425924474506 - type: nauc_mrr_at_1000_std value: 12.242914517959687 - type: nauc_mrr_at_100_diff1 value: 35.34838926395416 - type: nauc_mrr_at_100_max value: 27.67860094508867 - type: nauc_mrr_at_100_std value: 12.261677171728074 - type: nauc_mrr_at_10_diff1 value: 35.27102485638653 - type: nauc_mrr_at_10_max value: 27.477689977850154 - type: nauc_mrr_at_10_std value: 11.80499800891837 - type: nauc_mrr_at_1_diff1 value: 40.97638577245565 - type: nauc_mrr_at_1_max value: 23.585524680635643 - type: nauc_mrr_at_1_std value: 7.242935375380964 - type: nauc_mrr_at_20_diff1 value: 35.29506971612031 - type: nauc_mrr_at_20_max value: 27.627615743060264 - type: nauc_mrr_at_20_std value: 12.159260194079204 - type: nauc_mrr_at_3_diff1 value: 35.51192948129344 - type: nauc_mrr_at_3_max value: 26.1168315974585 - type: nauc_mrr_at_3_std value: 9.973253779504573 - type: nauc_mrr_at_5_diff1 value: 35.00372193705226 - type: nauc_mrr_at_5_max value: 26.927656142079055 - type: nauc_mrr_at_5_std value: 11.02304870858424 - type: nauc_ndcg_at_1000_diff1 value: 33.49991969615717 - type: nauc_ndcg_at_1000_max value: 30.209587350888466 - type: nauc_ndcg_at_1000_std value: 17.3604013033284 - type: nauc_ndcg_at_100_diff1 value: 33.09545326289088 - type: nauc_ndcg_at_100_max value: 30.41263075949266 - type: nauc_ndcg_at_100_std value: 17.74146555544644 - type: nauc_ndcg_at_10_diff1 value: 32.68974571152804 - type: nauc_ndcg_at_10_max value: 29.01307655615002 - type: nauc_ndcg_at_10_std value: 14.992165745891109 - type: nauc_ndcg_at_1_diff1 value: 40.97638577245565 - type: nauc_ndcg_at_1_max value: 23.585524680635643 - type: nauc_ndcg_at_1_std value: 7.242935375380964 - type: nauc_ndcg_at_20_diff1 value: 32.70131432794314 - type: nauc_ndcg_at_20_max value: 29.726619193431663 - type: nauc_ndcg_at_20_std value: 16.531701001153703 - type: nauc_ndcg_at_3_diff1 value: 33.2569305109641 - type: nauc_ndcg_at_3_max value: 26.29321481604789 - type: nauc_ndcg_at_3_std value: 11.081977028754668 - type: nauc_ndcg_at_5_diff1 value: 32.15552189165288 - type: nauc_ndcg_at_5_max value: 27.759460485679003 - type: nauc_ndcg_at_5_std value: 13.015694866625852 - type: nauc_precision_at_1000_diff1 value: 16.812073179287424 - type: nauc_precision_at_1000_max value: 37.540985211601225 - type: nauc_precision_at_1000_std value: 36.35910579672834 - type: 
nauc_precision_at_100_diff1 value: 21.3347768230779 - type: nauc_precision_at_100_max value: 44.04414745027497 - type: nauc_precision_at_100_std value: 41.39009594338739 - type: nauc_precision_at_10_diff1 value: 26.47938109887204 - type: nauc_precision_at_10_max value: 37.19520608029416 - type: nauc_precision_at_10_std value: 24.75031842784357 - type: nauc_precision_at_1_diff1 value: 40.97638577245565 - type: nauc_precision_at_1_max value: 23.585524680635643 - type: nauc_precision_at_1_std value: 7.242935375380964 - type: nauc_precision_at_20_diff1 value: 24.4933069920691 - type: nauc_precision_at_20_max value: 40.181333968312195 - type: nauc_precision_at_20_std value: 31.639505760980857 - type: nauc_precision_at_3_diff1 value: 29.55648660225873 - type: nauc_precision_at_3_max value: 29.42746113719643 - type: nauc_precision_at_3_std value: 13.134772598063801 - type: nauc_precision_at_5_diff1 value: 26.323829221173785 - type: nauc_precision_at_5_max value: 32.98308960955934 - type: nauc_precision_at_5_std value: 17.999660418650226 - type: nauc_recall_at_1000_diff1 value: 27.09758394920151 - type: nauc_recall_at_1000_max value: 49.014623321157934 - type: nauc_recall_at_1000_std value: 54.70002479259918 - type: nauc_recall_at_100_diff1 value: 24.828096194405543 - type: nauc_recall_at_100_max value: 44.57384134801808 - type: nauc_recall_at_100_std value: 45.47648726921974 - type: nauc_recall_at_10_diff1 value: 25.91218654921606 - type: nauc_recall_at_10_max value: 34.13630206088126 - type: nauc_recall_at_10_std value: 24.013987861347292 - type: nauc_recall_at_1_diff1 value: 39.75327570681995 - type: nauc_recall_at_1_max value: 22.516865174086742 - type: nauc_recall_at_1_std value: 7.352429770604897 - type: nauc_recall_at_20_diff1 value: 24.96068254117673 - type: nauc_recall_at_20_max value: 37.43760229546025 - type: nauc_recall_at_20_std value: 31.477306209504473 - type: nauc_recall_at_3_diff1 value: 28.060178345037656 - type: nauc_recall_at_3_max value: 27.364528772057188 - type: nauc_recall_at_3_std value: 13.396166074057525 - type: nauc_recall_at_5_diff1 value: 25.015826317297275 - type: nauc_recall_at_5_max value: 30.444068792645858 - type: nauc_recall_at_5_std value: 17.667570824756893 - type: ndcg_at_1 value: 27.753 - type: ndcg_at_10 value: 42.563 - type: ndcg_at_100 value: 46.572 - type: ndcg_at_1000 value: 47.825 - type: ndcg_at_20 value: 44.335 - type: ndcg_at_3 value: 36.54 - type: ndcg_at_5 value: 39.945 - type: precision_at_1 value: 27.753 - type: precision_at_10 value: 6.460000000000001 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.098 - type: precision_at_20 value: 3.615 - type: precision_at_3 value: 15.542 - type: precision_at_5 value: 11.134 - type: recall_at_1 value: 25.789 - type: recall_at_10 value: 59.022 - type: recall_at_100 value: 77.49000000000001 - type: recall_at_1000 value: 87.075 - type: recall_at_20 value: 65.813 - type: recall_at_3 value: 42.931000000000004 - type: recall_at_5 value: 51.148 task: type: Retrieval - dataset: config: default name: MTEB FEVER (default) revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 split: train type: mteb/fever metrics: - type: main_score value: 39.544000000000004 - type: map_at_1 value: 22.122 - type: map_at_10 value: 33.021 - type: map_at_100 value: 33.919 - type: map_at_1000 value: 33.974 - type: map_at_20 value: 33.57 - type: map_at_3 value: 29.823 - type: map_at_5 value: 31.698999999999998 - type: mrr_at_1 value: 24.632547126855478 - type: mrr_at_10 value: 36.27443108803912 - type: mrr_at_100 
value: 37.116518613084615 - type: mrr_at_1000 value: 37.16035774805865 - type: mrr_at_20 value: 36.80159654321279 - type: mrr_at_3 value: 32.990923716721404 - type: mrr_at_5 value: 34.94070060407349 - type: nauc_map_at_1000_diff1 value: 27.10508470711112 - type: nauc_map_at_1000_max value: 19.122369111175676 - type: nauc_map_at_1000_std value: 7.556987978069723 - type: nauc_map_at_100_diff1 value: 27.093923762733823 - type: nauc_map_at_100_max value: 19.125020548640016 - type: nauc_map_at_100_std value: 7.557312925203464 - type: nauc_map_at_10_diff1 value: 26.99461859425635 - type: nauc_map_at_10_max value: 18.810433489022632 - type: nauc_map_at_10_std value: 7.03658830181476 - type: nauc_map_at_1_diff1 value: 29.795588990770582 - type: nauc_map_at_1_max value: 13.964929294547716 - type: nauc_map_at_1_std value: 2.014683211424629 - type: nauc_map_at_20_diff1 value: 27.04355894870037 - type: nauc_map_at_20_max value: 19.053125107329684 - type: nauc_map_at_20_std value: 7.403819685723809 - type: nauc_map_at_3_diff1 value: 27.04176214403396 - type: nauc_map_at_3_max value: 17.09452873117263 - type: nauc_map_at_3_std value: 4.930539918889367 - type: nauc_map_at_5_diff1 value: 26.94702978336792 - type: nauc_map_at_5_max value: 18.11738254686368 - type: nauc_map_at_5_std value: 6.07806250068151 - type: nauc_mrr_at_1000_diff1 value: 28.115687821599565 - type: nauc_mrr_at_1000_max value: 20.022657307983696 - type: nauc_mrr_at_1000_std value: 6.049841455492619 - type: nauc_mrr_at_100_diff1 value: 28.10698592392563 - type: nauc_mrr_at_100_max value: 20.039825659015783 - type: nauc_mrr_at_100_std value: 6.067374534430724 - type: nauc_mrr_at_10_diff1 value: 28.01268183804612 - type: nauc_mrr_at_10_max value: 19.866652130070182 - type: nauc_mrr_at_10_std value: 5.7238201723873265 - type: nauc_mrr_at_1_diff1 value: 31.03974019855326 - type: nauc_mrr_at_1_max value: 14.816836320215055 - type: nauc_mrr_at_1_std value: 0.9833928793907583 - type: nauc_mrr_at_20_diff1 value: 28.0597179017192 - type: nauc_mrr_at_20_max value: 20.03479616925749 - type: nauc_mrr_at_20_std value: 6.003188285805082 - type: nauc_mrr_at_3_diff1 value: 28.102700297200357 - type: nauc_mrr_at_3_max value: 18.21123845169275 - type: nauc_mrr_at_3_std value: 3.76508019698659 - type: nauc_mrr_at_5_diff1 value: 27.995019911180925 - type: nauc_mrr_at_5_max value: 19.247724901440165 - type: nauc_mrr_at_5_std value: 4.867678189338567 - type: nauc_ndcg_at_1000_diff1 value: 27.001039847228327 - type: nauc_ndcg_at_1000_max value: 22.747905636602912 - type: nauc_ndcg_at_1000_std value: 12.131044834687508 - type: nauc_ndcg_at_100_diff1 value: 26.76465860993926 - type: nauc_ndcg_at_100_max value: 22.999445448847684 - type: nauc_ndcg_at_100_std value: 12.371530472512905 - type: nauc_ndcg_at_10_diff1 value: 26.271134051292965 - type: nauc_ndcg_at_10_max value: 21.708152120634573 - type: nauc_ndcg_at_10_std value: 9.956366430157477 - type: nauc_ndcg_at_1_diff1 value: 31.03974019855326 - type: nauc_ndcg_at_1_max value: 14.816836320215055 - type: nauc_ndcg_at_1_std value: 0.9833928793907583 - type: nauc_ndcg_at_20_diff1 value: 26.418892680753135 - type: nauc_ndcg_at_20_max value: 22.57247401098405 - type: nauc_ndcg_at_20_std value: 11.318053940484274 - type: nauc_ndcg_at_3_diff1 value: 26.48774759445648 - type: nauc_ndcg_at_3_max value: 18.362023388227982 - type: nauc_ndcg_at_3_std value: 5.5666629539029095 - type: nauc_ndcg_at_5_diff1 value: 26.231548007732275 - type: nauc_ndcg_at_5_max value: 20.108141595384858 - type: nauc_ndcg_at_5_std value: 
7.6451734794663775 - type: nauc_precision_at_1000_diff1 value: 10.312275248214805 - type: nauc_precision_at_1000_max value: 29.125490605573674 - type: nauc_precision_at_1000_std value: 27.980820440423077 - type: nauc_precision_at_100_diff1 value: 17.082042984133537 - type: nauc_precision_at_100_max value: 35.99017212218388 - type: nauc_precision_at_100_std value: 31.93133992782479 - type: nauc_precision_at_10_diff1 value: 22.490628336580023 - type: nauc_precision_at_10_max value: 31.505838103581745 - type: nauc_precision_at_10_std value: 19.04756851410053 - type: nauc_precision_at_1_diff1 value: 31.03974019855326 - type: nauc_precision_at_1_max value: 14.816836320215055 - type: nauc_precision_at_1_std value: 0.9833928793907583 - type: nauc_precision_at_20_diff1 value: 20.899414285866147 - type: nauc_precision_at_20_max value: 34.88025857085343 - type: nauc_precision_at_20_std value: 24.931946815895103 - type: nauc_precision_at_3_diff1 value: 24.7584008425105 - type: nauc_precision_at_3_max value: 22.05117420112601 - type: nauc_precision_at_3_std value: 7.0182363705564725 - type: nauc_precision_at_5_diff1 value: 23.70576059633269 - type: nauc_precision_at_5_max value: 26.41538125990869 - type: nauc_precision_at_5_std value: 11.811642655248509 - type: nauc_recall_at_1000_diff1 value: 22.666331359570606 - type: nauc_recall_at_1000_max value: 42.022586063014 - type: nauc_recall_at_1000_std value: 51.28078945425114 - type: nauc_recall_at_100_diff1 value: 22.038970758345137 - type: nauc_recall_at_100_max value: 37.50016128544992 - type: nauc_recall_at_100_std value: 38.42258858995013 - type: nauc_recall_at_10_diff1 value: 21.83627962615356 - type: nauc_recall_at_10_max value: 28.17929505850154 - type: nauc_recall_at_10_std value: 19.711584731579144 - type: nauc_recall_at_1_diff1 value: 29.795588990770582 - type: nauc_recall_at_1_max value: 13.964929294547716 - type: nauc_recall_at_1_std value: 2.014683211424629 - type: nauc_recall_at_20_diff1 value: 21.612730102448293 - type: nauc_recall_at_20_max value: 31.944811405726607 - type: nauc_recall_at_20_std value: 26.127216044442598 - type: nauc_recall_at_3_diff1 value: 23.365847069042577 - type: nauc_recall_at_3_max value: 19.98758181260918 - type: nauc_recall_at_3_std value: 8.28613301698417 - type: nauc_recall_at_5_diff1 value: 22.39829927032666 - type: nauc_recall_at_5_max value: 23.59853172030037 - type: nauc_recall_at_5_std value: 12.723055723433257 - type: ndcg_at_1 value: 24.633 - type: ndcg_at_10 value: 39.544000000000004 - type: ndcg_at_100 value: 43.85 - type: ndcg_at_1000 value: 45.283 - type: ndcg_at_20 value: 41.471999999999994 - type: ndcg_at_3 value: 33.168 - type: ndcg_at_5 value: 36.437000000000005 - type: precision_at_1 value: 24.633 - type: precision_at_10 value: 6.429 - type: precision_at_100 value: 0.881 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_20 value: 3.649 - type: precision_at_3 value: 14.796999999999999 - type: precision_at_5 value: 10.695 - type: recall_at_1 value: 22.122 - type: recall_at_10 value: 56.74 - type: recall_at_100 value: 76.285 - type: recall_at_1000 value: 87.13300000000001 - type: recall_at_20 value: 64.067 - type: recall_at_3 value: 39.446 - type: recall_at_5 value: 47.379 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 (default) revision: 27a168819829fe9bcd655c2df245fb19452e8e06 split: dev type: mteb/fiqa metrics: - type: main_score value: 43.452 - type: map_at_1 value: 22.435 - type: map_at_10 value: 35.832 - type: map_at_100 value: 37.632 - type: 
map_at_1000 value: 37.799 - type: map_at_20 value: 36.8 - type: map_at_3 value: 31.689 - type: map_at_5 value: 34.266999999999996 - type: mrr_at_1 value: 41.4 - type: mrr_at_10 value: 50.4342857142857 - type: mrr_at_100 value: 51.21821035615517 - type: mrr_at_1000 value: 51.266442546162516 - type: mrr_at_20 value: 50.91712414176811 - type: mrr_at_3 value: 48.43333333333335 - type: mrr_at_5 value: 49.64333333333333 - type: nauc_map_at_1000_diff1 value: 42.153292679851276 - type: nauc_map_at_1000_max value: 29.601619670272207 - type: nauc_map_at_1000_std value: -0.7383965062341508 - type: nauc_map_at_100_diff1 value: 42.07556710921584 - type: nauc_map_at_100_max value: 29.464776621441306 - type: nauc_map_at_100_std value: -0.7388582229371314 - type: nauc_map_at_10_diff1 value: 42.67413345132165 - type: nauc_map_at_10_max value: 27.785600435680596 - type: nauc_map_at_10_std value: -2.389554217354724 - type: nauc_map_at_1_diff1 value: 52.10965701450313 - type: nauc_map_at_1_max value: 21.771380078575188 - type: nauc_map_at_1_std value: -10.678726101360372 - type: nauc_map_at_20_diff1 value: 42.03673683254875 - type: nauc_map_at_20_max value: 28.619853272850836 - type: nauc_map_at_20_std value: -1.352968012843277 - type: nauc_map_at_3_diff1 value: 45.02940789042909 - type: nauc_map_at_3_max value: 25.73172449783013 - type: nauc_map_at_3_std value: -5.405882526355802 - type: nauc_map_at_5_diff1 value: 43.70377705353985 - type: nauc_map_at_5_max value: 27.06264350949003 - type: nauc_map_at_5_std value: -3.4566488552159638 - type: nauc_mrr_at_1000_diff1 value: 46.960388982124144 - type: nauc_mrr_at_1000_max value: 38.551457905980335 - type: nauc_mrr_at_1000_std value: -2.7251142586180452 - type: nauc_mrr_at_100_diff1 value: 46.95671307176507 - type: nauc_mrr_at_100_max value: 38.557512138487596 - type: nauc_mrr_at_100_std value: -2.670703447776175 - type: nauc_mrr_at_10_diff1 value: 46.88159177559446 - type: nauc_mrr_at_10_max value: 38.274948278122025 - type: nauc_mrr_at_10_std value: -2.7583668412620073 - type: nauc_mrr_at_1_diff1 value: 53.27259228216133 - type: nauc_mrr_at_1_max value: 39.950693167149126 - type: nauc_mrr_at_1_std value: -7.918584226469945 - type: nauc_mrr_at_20_diff1 value: 46.81459261048091 - type: nauc_mrr_at_20_max value: 38.41539751417714 - type: nauc_mrr_at_20_std value: -2.822290535243399 - type: nauc_mrr_at_3_diff1 value: 47.02719824632526 - type: nauc_mrr_at_3_max value: 38.18543976624867 - type: nauc_mrr_at_3_std value: -4.303513153685435 - type: nauc_mrr_at_5_diff1 value: 47.30493874120552 - type: nauc_mrr_at_5_max value: 38.74873358201773 - type: nauc_mrr_at_5_std value: -3.018961404554449 - type: nauc_ndcg_at_1000_diff1 value: 41.34144224154281 - type: nauc_ndcg_at_1000_max value: 34.69321113499929 - type: nauc_ndcg_at_1000_std value: 3.219617986178771 - type: nauc_ndcg_at_100_diff1 value: 40.527324056051725 - type: nauc_ndcg_at_100_max value: 33.35505365876672 - type: nauc_ndcg_at_100_std value: 4.298571598887196 - type: nauc_ndcg_at_10_diff1 value: 40.596904721915195 - type: nauc_ndcg_at_10_max value: 29.70053338455224 - type: nauc_ndcg_at_10_std value: 0.6266822992314294 - type: nauc_ndcg_at_1_diff1 value: 53.27259228216133 - type: nauc_ndcg_at_1_max value: 39.950693167149126 - type: nauc_ndcg_at_1_std value: -7.918584226469945 - type: nauc_ndcg_at_20_diff1 value: 39.5053199292571 - type: nauc_ndcg_at_20_max value: 30.481412422904842 - type: nauc_ndcg_at_20_std value: 1.8476982628200813 - type: nauc_ndcg_at_3_diff1 value: 41.230751065511214 - type: 
nauc_ndcg_at_3_max value: 32.91537830913592 - type: nauc_ndcg_at_3_std value: -1.3527356468708593 - type: nauc_ndcg_at_5_diff1 value: 41.73919141452553 - type: nauc_ndcg_at_5_max value: 31.475892998444465 - type: nauc_ndcg_at_5_std value: -0.07736285589192196 - type: nauc_precision_at_1000_diff1 value: -4.581230675366823 - type: nauc_precision_at_1000_max value: 35.28244987228938 - type: nauc_precision_at_1000_std value: 19.848724961997043 - type: nauc_precision_at_100_diff1 value: -1.0470568797091013 - type: nauc_precision_at_100_max value: 38.1171253236869 - type: nauc_precision_at_100_std value: 25.36360406306609 - type: nauc_precision_at_10_diff1 value: 9.535027734151406 - type: nauc_precision_at_10_max value: 35.156066985665404 - type: nauc_precision_at_10_std value: 14.719266832480818 - type: nauc_precision_at_1_diff1 value: 53.27259228216133 - type: nauc_precision_at_1_max value: 39.950693167149126 - type: nauc_precision_at_1_std value: -7.918584226469945 - type: nauc_precision_at_20_diff1 value: 3.192256990840671 - type: nauc_precision_at_20_max value: 36.28479694909197 - type: nauc_precision_at_20_std value: 20.43233178984753 - type: nauc_precision_at_3_diff1 value: 23.206183980981294 - type: nauc_precision_at_3_max value: 35.68270813316575 - type: nauc_precision_at_3_std value: 5.763084791714441 - type: nauc_precision_at_5_diff1 value: 16.311253397261815 - type: nauc_precision_at_5_max value: 35.362299389447415 - type: nauc_precision_at_5_std value: 11.070937256666982 - type: nauc_recall_at_1000_diff1 value: 13.246657177207306 - type: nauc_recall_at_1000_max value: 34.11290480894018 - type: nauc_recall_at_1000_std value: 28.2944964848988 - type: nauc_recall_at_100_diff1 value: 22.167289428754383 - type: nauc_recall_at_100_max value: 24.776741594012847 - type: nauc_recall_at_100_std value: 22.59200251836463 - type: nauc_recall_at_10_diff1 value: 28.175864072418843 - type: nauc_recall_at_10_max value: 17.931411205973205 - type: nauc_recall_at_10_std value: 6.352643320621673 - type: nauc_recall_at_1_diff1 value: 52.10965701450313 - type: nauc_recall_at_1_max value: 21.771380078575188 - type: nauc_recall_at_1_std value: -10.678726101360372 - type: nauc_recall_at_20_diff1 value: 22.886480231407088 - type: nauc_recall_at_20_max value: 18.617961694923725 - type: nauc_recall_at_20_std value: 9.027816465740859 - type: nauc_recall_at_3_diff1 value: 35.80141995616683 - type: nauc_recall_at_3_max value: 20.28065350985076 - type: nauc_recall_at_3_std value: -2.0345418109694915 - type: nauc_recall_at_5_diff1 value: 33.96384875377924 - type: nauc_recall_at_5_max value: 20.532436267734845 - type: nauc_recall_at_5_std value: 1.626798085762849 - type: ndcg_at_1 value: 41.4 - type: ndcg_at_10 value: 43.452 - type: ndcg_at_100 value: 49.655 - type: ndcg_at_1000 value: 52.596 - type: ndcg_at_20 value: 45.769999999999996 - type: ndcg_at_3 value: 40.157 - type: ndcg_at_5 value: 41.481 - type: precision_at_1 value: 41.4 - type: precision_at_10 value: 11.4 - type: precision_at_100 value: 1.7919999999999998 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_20 value: 6.77 - type: precision_at_3 value: 25.733 - type: precision_at_5 value: 19.16 - type: recall_at_1 value: 22.435 - type: recall_at_10 value: 50.638000000000005 - type: recall_at_100 value: 73.819 - type: recall_at_1000 value: 92.134 - type: recall_at_20 value: 57.618 - type: recall_at_3 value: 37.222 - type: recall_at_5 value: 43.602000000000004 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 
(default) revision: 27a168819829fe9bcd655c2df245fb19452e8e06 split: test type: mteb/fiqa metrics: - type: main_score value: 40.965 - type: map_at_1 value: 19.926 - type: map_at_10 value: 32.867000000000004 - type: map_at_100 value: 34.97 - type: map_at_1000 value: 35.158 - type: map_at_20 value: 34.067 - type: map_at_3 value: 28.854000000000003 - type: map_at_5 value: 30.897999999999996 - type: mrr_at_1 value: 39.660493827160494 - type: mrr_at_10 value: 49.50402949245541 - type: mrr_at_100 value: 50.29938288169482 - type: mrr_at_1000 value: 50.33496267350495 - type: mrr_at_20 value: 49.94865952256876 - type: mrr_at_3 value: 46.939300411522616 - type: mrr_at_5 value: 48.39763374485595 - type: nauc_map_at_1000_diff1 value: 26.494855045027233 - type: nauc_map_at_1000_max value: 27.303620903381308 - type: nauc_map_at_1000_std value: -1.4561552032645217 - type: nauc_map_at_100_diff1 value: 26.508482155009656 - type: nauc_map_at_100_max value: 27.197476666972175 - type: nauc_map_at_100_std value: -1.4452769452596321 - type: nauc_map_at_10_diff1 value: 26.374957272593257 - type: nauc_map_at_10_max value: 25.630002139583425 - type: nauc_map_at_10_std value: -2.7457793654796303 - type: nauc_map_at_1_diff1 value: 32.633041679053 - type: nauc_map_at_1_max value: 18.95078398066585 - type: nauc_map_at_1_std value: -5.289494036677301 - type: nauc_map_at_20_diff1 value: 26.420832066149863 - type: nauc_map_at_20_max value: 26.359849203621906 - type: nauc_map_at_20_std value: -2.1819017372318896 - type: nauc_map_at_3_diff1 value: 27.926274507465116 - type: nauc_map_at_3_max value: 23.05501535710793 - type: nauc_map_at_3_std value: -4.325658334680725 - type: nauc_map_at_5_diff1 value: 27.269028553122237 - type: nauc_map_at_5_max value: 24.606451651258975 - type: nauc_map_at_5_std value: -3.8403860613657264 - type: nauc_mrr_at_1000_diff1 value: 32.27607008114233 - type: nauc_mrr_at_1000_max value: 38.46692644890318 - type: nauc_mrr_at_1000_std value: 0.8468212008159507 - type: nauc_mrr_at_100_diff1 value: 32.27189093598166 - type: nauc_mrr_at_100_max value: 38.47818433738234 - type: nauc_mrr_at_100_std value: 0.86792905411205 - type: nauc_mrr_at_10_diff1 value: 32.27220558113546 - type: nauc_mrr_at_10_max value: 38.5768621159052 - type: nauc_mrr_at_10_std value: 0.7700057337792469 - type: nauc_mrr_at_1_diff1 value: 36.11482505826484 - type: nauc_mrr_at_1_max value: 38.21978668740053 - type: nauc_mrr_at_1_std value: 0.32943153196143976 - type: nauc_mrr_at_20_diff1 value: 32.20429693089343 - type: nauc_mrr_at_20_max value: 38.45040837784157 - type: nauc_mrr_at_20_std value: 0.8750958039709447 - type: nauc_mrr_at_3_diff1 value: 32.36086055016881 - type: nauc_mrr_at_3_max value: 37.587053558645614 - type: nauc_mrr_at_3_std value: -0.14274663611290778 - type: nauc_mrr_at_5_diff1 value: 32.34528370475561 - type: nauc_mrr_at_5_max value: 38.39546966320742 - type: nauc_mrr_at_5_std value: 0.03003840440912135 - type: nauc_ndcg_at_1000_diff1 value: 26.691917929958546 - type: nauc_ndcg_at_1000_max value: 32.58961283234771 - type: nauc_ndcg_at_1000_std value: 2.130192967914691 - type: nauc_ndcg_at_100_diff1 value: 26.872799707070357 - type: nauc_ndcg_at_100_max value: 31.503230695989938 - type: nauc_ndcg_at_100_std value: 3.3435217735327365 - type: nauc_ndcg_at_10_diff1 value: 26.22445930619204 - type: nauc_ndcg_at_10_max value: 28.332502188521534 - type: nauc_ndcg_at_10_std value: -0.6029849879688606 - type: nauc_ndcg_at_1_diff1 value: 36.11482505826484 - type: nauc_ndcg_at_1_max value: 38.21978668740053 - type: 
nauc_ndcg_at_1_std value: 0.32943153196143976 - type: nauc_ndcg_at_20_diff1 value: 26.167580898864678 - type: nauc_ndcg_at_20_max value: 29.138479770125414 - type: nauc_ndcg_at_20_std value: 0.7659081670630379 - type: nauc_ndcg_at_3_diff1 value: 28.517059827975217 - type: nauc_ndcg_at_3_max value: 31.64778875185945 - type: nauc_ndcg_at_3_std value: -2.4273609628572763 - type: nauc_ndcg_at_5_diff1 value: 27.523926620375715 - type: nauc_ndcg_at_5_max value: 29.36575905903845 - type: nauc_ndcg_at_5_std value: -2.786795946093564 - type: nauc_precision_at_1000_diff1 value: -2.197004713716575 - type: nauc_precision_at_1000_max value: 34.162523162834376 - type: nauc_precision_at_1000_std value: 8.986097658449319 - type: nauc_precision_at_100_diff1 value: 3.134123713987622 - type: nauc_precision_at_100_max value: 36.85792724470659 - type: nauc_precision_at_100_std value: 14.919862433706202 - type: nauc_precision_at_10_diff1 value: 9.854013398949677 - type: nauc_precision_at_10_max value: 37.525893321985755 - type: nauc_precision_at_10_std value: 8.696222830370099 - type: nauc_precision_at_1_diff1 value: 36.11482505826484 - type: nauc_precision_at_1_max value: 38.21978668740053 - type: nauc_precision_at_1_std value: 0.32943153196143976 - type: nauc_precision_at_20_diff1 value: 6.858037910006135 - type: nauc_precision_at_20_max value: 37.575334948715025 - type: nauc_precision_at_20_std value: 11.273137972815993 - type: nauc_precision_at_3_diff1 value: 19.692731738653848 - type: nauc_precision_at_3_max value: 37.619642815139876 - type: nauc_precision_at_3_std value: 2.0435791110517734 - type: nauc_precision_at_5_diff1 value: 15.784071911592584 - type: nauc_precision_at_5_max value: 39.05471012980888 - type: nauc_precision_at_5_std value: 4.244303875173872 - type: nauc_recall_at_1000_diff1 value: 8.108025620944437 - type: nauc_recall_at_1000_max value: 15.671603738840128 - type: nauc_recall_at_1000_std value: 18.119568463795165 - type: nauc_recall_at_100_diff1 value: 18.48107834900102 - type: nauc_recall_at_100_max value: 20.209949068924594 - type: nauc_recall_at_100_std value: 19.01259420147078 - type: nauc_recall_at_10_diff1 value: 17.93458808832428 - type: nauc_recall_at_10_max value: 17.81863413759591 - type: nauc_recall_at_10_std value: 0.3549693938691897 - type: nauc_recall_at_1_diff1 value: 32.633041679053 - type: nauc_recall_at_1_max value: 18.95078398066585 - type: nauc_recall_at_1_std value: -5.289494036677301 - type: nauc_recall_at_20_diff1 value: 16.794175854855236 - type: nauc_recall_at_20_max value: 17.227732008238515 - type: nauc_recall_at_20_std value: 4.118397119053081 - type: nauc_recall_at_3_diff1 value: 22.401972357181148 - type: nauc_recall_at_3_max value: 17.63593806963239 - type: nauc_recall_at_3_std value: -4.308586374199029 - type: nauc_recall_at_5_diff1 value: 21.149136573276326 - type: nauc_recall_at_5_max value: 18.586766434816816 - type: nauc_recall_at_5_std value: -4.198379401256694 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 40.965 - type: ndcg_at_100 value: 48.254999999999995 - type: ndcg_at_1000 value: 51.307 - type: ndcg_at_20 value: 43.943 - type: ndcg_at_3 value: 37.379 - type: ndcg_at_5 value: 38.251000000000005 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 11.42 - type: precision_at_100 value: 1.8870000000000002 - type: precision_at_1000 value: 0.242 - type: precision_at_20 value: 6.9750000000000005 - type: precision_at_3 value: 25.308999999999997 - type: precision_at_5 value: 18.21 - type: recall_at_1 
value: 19.926 - type: recall_at_10 value: 47.951 - type: recall_at_100 value: 74.703 - type: recall_at_1000 value: 93.003 - type: recall_at_20 value: 57.062000000000005 - type: recall_at_3 value: 33.942 - type: recall_at_5 value: 39.39 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 (default) revision: 27a168819829fe9bcd655c2df245fb19452e8e06 split: train type: mteb/fiqa metrics: - type: main_score value: 40.109 - type: map_at_1 value: 19.428 - type: map_at_10 value: 32.132 - type: map_at_100 value: 34.095 - type: map_at_1000 value: 34.275 - type: map_at_20 value: 33.223 - type: map_at_3 value: 27.744999999999997 - type: map_at_5 value: 30.139 - type: mrr_at_1 value: 37.43636363636364 - type: mrr_at_10 value: 46.699083694083626 - type: mrr_at_100 value: 47.54203896354004 - type: mrr_at_1000 value: 47.58580275375307 - type: mrr_at_20 value: 47.198759673340916 - type: mrr_at_3 value: 44.00606060606058 - type: mrr_at_5 value: 45.532424242424206 - type: nauc_map_at_1000_diff1 value: 37.79487277315329 - type: nauc_map_at_1000_max value: 26.0872634860684 - type: nauc_map_at_1000_std value: -1.0279629281127562 - type: nauc_map_at_100_diff1 value: 37.78657280270208 - type: nauc_map_at_100_max value: 25.97447841536255 - type: nauc_map_at_100_std value: -1.0680554938428704 - type: nauc_map_at_10_diff1 value: 37.75308574166357 - type: nauc_map_at_10_max value: 24.48296458166315 - type: nauc_map_at_10_std value: -2.4706230738202115 - type: nauc_map_at_1_diff1 value: 42.6444957224609 - type: nauc_map_at_1_max value: 17.312823253319056 - type: nauc_map_at_1_std value: -5.949436425909343 - type: nauc_map_at_20_diff1 value: 37.77686951531909 - type: nauc_map_at_20_max value: 25.40897694790217 - type: nauc_map_at_20_std value: -1.6373032823813254 - type: nauc_map_at_3_diff1 value: 38.60509752110032 - type: nauc_map_at_3_max value: 21.348684256478478 - type: nauc_map_at_3_std value: -4.810470520501061 - type: nauc_map_at_5_diff1 value: 38.09219815429182 - type: nauc_map_at_5_max value: 22.835685949895517 - type: nauc_map_at_5_std value: -3.9401553768980317 - type: nauc_mrr_at_1000_diff1 value: 42.085039832710486 - type: nauc_mrr_at_1000_max value: 35.16928700370471 - type: nauc_mrr_at_1000_std value: 0.9299074027021959 - type: nauc_mrr_at_100_diff1 value: 42.07215266936291 - type: nauc_mrr_at_100_max value: 35.17189764710499 - type: nauc_mrr_at_100_std value: 0.9530063786251255 - type: nauc_mrr_at_10_diff1 value: 41.98881249801537 - type: nauc_mrr_at_10_max value: 35.09509256566193 - type: nauc_mrr_at_10_std value: 0.794172569517322 - type: nauc_mrr_at_1_diff1 value: 46.077585624783666 - type: nauc_mrr_at_1_max value: 35.39409660069189 - type: nauc_mrr_at_1_std value: -1.533340930893894 - type: nauc_mrr_at_20_diff1 value: 42.00798938048869 - type: nauc_mrr_at_20_max value: 35.158812601568314 - type: nauc_mrr_at_20_std value: 0.9056333950877309 - type: nauc_mrr_at_3_diff1 value: 42.59361038737236 - type: nauc_mrr_at_3_max value: 34.98645026093173 - type: nauc_mrr_at_3_std value: 0.0023951964897005276 - type: nauc_mrr_at_5_diff1 value: 42.2008327692426 - type: nauc_mrr_at_5_max value: 34.81803036128217 - type: nauc_mrr_at_5_std value: 0.21171705701408164 - type: nauc_ndcg_at_1000_diff1 value: 37.59389251562878 - type: nauc_ndcg_at_1000_max value: 31.01866705444215 - type: nauc_ndcg_at_1000_std value: 3.218055181162254 - type: nauc_ndcg_at_100_diff1 value: 37.12230330729587 - type: nauc_ndcg_at_100_max value: 29.935095243810167 - type: nauc_ndcg_at_100_std value: 3.7200669195773437 - type: 
nauc_ndcg_at_10_diff1 value: 36.96336573645087 - type: nauc_ndcg_at_10_max value: 27.254442355026736 - type: nauc_ndcg_at_10_std value: 0.14665445172204347 - type: nauc_ndcg_at_1_diff1 value: 46.077585624783666 - type: nauc_ndcg_at_1_max value: 35.39409660069189 - type: nauc_ndcg_at_1_std value: -1.533340930893894 - type: nauc_ndcg_at_20_diff1 value: 37.074571249220114 - type: nauc_ndcg_at_20_max value: 28.399045519694383 - type: nauc_ndcg_at_20_std value: 1.550978185230789 - type: nauc_ndcg_at_3_diff1 value: 37.59928811563115 - type: nauc_ndcg_at_3_max value: 28.982761358417385 - type: nauc_ndcg_at_3_std value: -1.238849396182027 - type: nauc_ndcg_at_5_diff1 value: 37.406291033992446 - type: nauc_ndcg_at_5_max value: 27.126892323381625 - type: nauc_ndcg_at_5_std value: -1.3896938747693226 - type: nauc_precision_at_1000_diff1 value: -1.2563257534530403 - type: nauc_precision_at_1000_max value: 35.797345612886375 - type: nauc_precision_at_1000_std value: 17.41472736776915 - type: nauc_precision_at_100_diff1 value: 4.952971767905249 - type: nauc_precision_at_100_max value: 38.73911955037188 - type: nauc_precision_at_100_std value: 21.066156963949332 - type: nauc_precision_at_10_diff1 value: 15.966764461879817 - type: nauc_precision_at_10_max value: 38.48114461750857 - type: nauc_precision_at_10_std value: 12.21332681895805 - type: nauc_precision_at_1_diff1 value: 46.077585624783666 - type: nauc_precision_at_1_max value: 35.39409660069189 - type: nauc_precision_at_1_std value: -1.533340930893894 - type: nauc_precision_at_20_diff1 value: 12.863561744619767 - type: nauc_precision_at_20_max value: 39.97799707791258 - type: nauc_precision_at_20_std value: 16.479326047022596 - type: nauc_precision_at_3_diff1 value: 26.270777531636746 - type: nauc_precision_at_3_max value: 36.221006349866734 - type: nauc_precision_at_3_std value: 4.132546766200079 - type: nauc_precision_at_5_diff1 value: 21.514798016502567 - type: nauc_precision_at_5_max value: 36.78486009359055 - type: nauc_precision_at_5_std value: 6.550630352857908 - type: nauc_recall_at_1000_diff1 value: 18.296781187173607 - type: nauc_recall_at_1000_max value: 16.345940832080522 - type: nauc_recall_at_1000_std value: 25.876663129650872 - type: nauc_recall_at_100_diff1 value: 22.34531148768501 - type: nauc_recall_at_100_max value: 18.990695799061044 - type: nauc_recall_at_100_std value: 17.324570191171166 - type: nauc_recall_at_10_diff1 value: 26.9638177104112 - type: nauc_recall_at_10_max value: 18.015343527525935 - type: nauc_recall_at_10_std value: 1.9739545303234234 - type: nauc_recall_at_1_diff1 value: 42.6444957224609 - type: nauc_recall_at_1_max value: 17.312823253319056 - type: nauc_recall_at_1_std value: -5.949436425909343 - type: nauc_recall_at_20_diff1 value: 25.89370052543927 - type: nauc_recall_at_20_max value: 19.183543872025176 - type: nauc_recall_at_20_std value: 5.845168418090508 - type: nauc_recall_at_3_diff1 value: 32.42588221746735 - type: nauc_recall_at_3_max value: 16.982760583883568 - type: nauc_recall_at_3_std value: -3.8563202145821722 - type: nauc_recall_at_5_diff1 value: 29.949897843617833 - type: nauc_recall_at_5_max value: 16.763718302170407 - type: nauc_recall_at_5_std value: -2.4876499904473515 - type: ndcg_at_1 value: 37.436 - type: ndcg_at_10 value: 40.109 - type: ndcg_at_100 value: 47.099000000000004 - type: ndcg_at_1000 value: 50.09 - type: ndcg_at_20 value: 42.807 - type: ndcg_at_3 value: 35.614000000000004 - type: ndcg_at_5 value: 37.202 - type: precision_at_1 value: 37.436 - type: precision_at_10 value: 
11.14 - type: precision_at_100 value: 1.839 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_20 value: 6.7379999999999995 - type: precision_at_3 value: 23.660999999999998 - type: precision_at_5 value: 17.589 - type: recall_at_1 value: 19.428 - type: recall_at_10 value: 48.262 - type: recall_at_100 value: 74.533 - type: recall_at_1000 value: 92.707 - type: recall_at_20 value: 56.684 - type: recall_at_3 value: 32.918 - type: recall_at_5 value: 39.565 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA (default) revision: ab518f4d6fcca38d87c25209f94beba119d02014 split: dev type: mteb/hotpotqa metrics: - type: main_score value: 27.319 - type: map_at_1 value: 15.128 - type: map_at_10 value: 21.121000000000002 - type: map_at_100 value: 21.973000000000003 - type: map_at_1000 value: 22.073999999999998 - type: map_at_20 value: 21.582 - type: map_at_3 value: 19.323999999999998 - type: map_at_5 value: 20.358 - type: mrr_at_1 value: 30.255186341105194 - type: mrr_at_10 value: 36.86408857649899 - type: mrr_at_100 value: 37.61006780296905 - type: mrr_at_1000 value: 37.6719518655528 - type: mrr_at_20 value: 37.281873457624904 - type: mrr_at_3 value: 35.11107031393417 - type: mrr_at_5 value: 36.16669726454924 - type: nauc_map_at_1000_diff1 value: 48.847643794509025 - type: nauc_map_at_1000_max value: 39.770122515103765 - type: nauc_map_at_1000_std value: 22.807391952064208 - type: nauc_map_at_100_diff1 value: 48.84836642359173 - type: nauc_map_at_100_max value: 39.74431147433934 - type: nauc_map_at_100_std value: 22.742541317197745 - type: nauc_map_at_10_diff1 value: 49.4233743176388 - type: nauc_map_at_10_max value: 39.91993995966069 - type: nauc_map_at_10_std value: 21.91665772388842 - type: nauc_map_at_1_diff1 value: 58.0289225809443 - type: nauc_map_at_1_max value: 43.869928323541096 - type: nauc_map_at_1_std value: 18.414910058766996 - type: nauc_map_at_20_diff1 value: 48.946851303177276 - type: nauc_map_at_20_max value: 39.637722313894855 - type: nauc_map_at_20_std value: 22.36990188694816 - type: nauc_map_at_3_diff1 value: 50.9755320771899 - type: nauc_map_at_3_max value: 41.04341520188789 - type: nauc_map_at_3_std value: 20.73111078435784 - type: nauc_map_at_5_diff1 value: 50.29129566530908 - type: nauc_map_at_5_max value: 40.820300878574244 - type: nauc_map_at_5_std value: 21.632271796529135 - type: nauc_mrr_at_1000_diff1 value: 52.9718341979705 - type: nauc_mrr_at_1000_max value: 42.0423808089922 - type: nauc_mrr_at_1000_std value: 20.9799244867451 - type: nauc_mrr_at_100_diff1 value: 52.95863792063511 - type: nauc_mrr_at_100_max value: 42.02576849188542 - type: nauc_mrr_at_100_std value: 20.979507647583475 - type: nauc_mrr_at_10_diff1 value: 53.11373095655967 - type: nauc_mrr_at_10_max value: 42.105421475578744 - type: nauc_mrr_at_10_std value: 20.722957880316024 - type: nauc_mrr_at_1_diff1 value: 58.0289225809443 - type: nauc_mrr_at_1_max value: 43.869928323541096 - type: nauc_mrr_at_1_std value: 18.414910058766996 - type: nauc_mrr_at_20_diff1 value: 53.00870629169023 - type: nauc_mrr_at_20_max value: 42.041577465122714 - type: nauc_mrr_at_20_std value: 20.912051250134024 - type: nauc_mrr_at_3_diff1 value: 53.77971068931782 - type: nauc_mrr_at_3_max value: 42.49181103386521 - type: nauc_mrr_at_3_std value: 20.140203552794727 - type: nauc_mrr_at_5_diff1 value: 53.475770315841466 - type: nauc_mrr_at_5_max value: 42.386857011080636 - type: nauc_mrr_at_5_std value: 20.640622787052557 - type: nauc_ndcg_at_1000_diff1 value: 46.64695418952649 - type: 
nauc_ndcg_at_1000_max value: 39.66342903978547 - type: nauc_ndcg_at_1000_std value: 26.893939887796396 - type: nauc_ndcg_at_100_diff1 value: 46.530602184859525 - type: nauc_ndcg_at_100_max value: 38.925897286535076 - type: nauc_ndcg_at_100_std value: 25.713663164332328 - type: nauc_ndcg_at_10_diff1 value: 48.23171706325336 - type: nauc_ndcg_at_10_max value: 39.30673245947019 - type: nauc_ndcg_at_10_std value: 22.76919274555658 - type: nauc_ndcg_at_1_diff1 value: 58.0289225809443 - type: nauc_ndcg_at_1_max value: 43.869928323541096 - type: nauc_ndcg_at_1_std value: 18.414910058766996 - type: nauc_ndcg_at_20_diff1 value: 47.13236431440934 - type: nauc_ndcg_at_20_max value: 38.653618449937696 - type: nauc_ndcg_at_20_std value: 23.92208488492667 - type: nauc_ndcg_at_3_diff1 value: 50.624514000929224 - type: nauc_ndcg_at_3_max value: 41.105294702595586 - type: nauc_ndcg_at_3_std value: 21.130087325966326 - type: nauc_ndcg_at_5_diff1 value: 49.69172113431722 - type: nauc_ndcg_at_5_max value: 40.75186606108917 - type: nauc_ndcg_at_5_std value: 22.32512469362848 - type: nauc_precision_at_1000_diff1 value: 23.160094544144965 - type: nauc_precision_at_1000_max value: 28.661321254269932 - type: nauc_precision_at_1000_std value: 38.94759586932569 - type: nauc_precision_at_100_diff1 value: 28.051281498032537 - type: nauc_precision_at_100_max value: 27.9218322543234 - type: nauc_precision_at_100_std value: 32.35825145192534 - type: nauc_precision_at_10_diff1 value: 38.18649250868066 - type: nauc_precision_at_10_max value: 32.670270264955114 - type: nauc_precision_at_10_std value: 24.605022379185275 - type: nauc_precision_at_1_diff1 value: 58.0289225809443 - type: nauc_precision_at_1_max value: 43.869928323541096 - type: nauc_precision_at_1_std value: 18.414910058766996 - type: nauc_precision_at_20_diff1 value: 34.064945849558676 - type: nauc_precision_at_20_max value: 29.99092267433295 - type: nauc_precision_at_20_std value: 27.480111135148235 - type: nauc_precision_at_3_diff1 value: 45.96635358139459 - type: nauc_precision_at_3_max value: 39.19530950679048 - type: nauc_precision_at_3_std value: 22.692664071111174 - type: nauc_precision_at_5_diff1 value: 42.80772429338654 - type: nauc_precision_at_5_max value: 37.30900415179882 - type: nauc_precision_at_5_std value: 24.39700402843405 - type: nauc_recall_at_1000_diff1 value: 23.160094544144993 - type: nauc_recall_at_1000_max value: 28.661321254269968 - type: nauc_recall_at_1000_std value: 38.94759586932564 - type: nauc_recall_at_100_diff1 value: 28.051281498032505 - type: nauc_recall_at_100_max value: 27.921832254323387 - type: nauc_recall_at_100_std value: 32.35825145192532 - type: nauc_recall_at_10_diff1 value: 38.186492508680615 - type: nauc_recall_at_10_max value: 32.6702702649551 - type: nauc_recall_at_10_std value: 24.60502237918523 - type: nauc_recall_at_1_diff1 value: 58.0289225809443 - type: nauc_recall_at_1_max value: 43.869928323541096 - type: nauc_recall_at_1_std value: 18.414910058766996 - type: nauc_recall_at_20_diff1 value: 34.06494584955859 - type: nauc_recall_at_20_max value: 29.990922674332886 - type: nauc_recall_at_20_std value: 27.48011113514819 - type: nauc_recall_at_3_diff1 value: 45.96635358139468 - type: nauc_recall_at_3_max value: 39.19530950679048 - type: nauc_recall_at_3_std value: 22.69266407111118 - type: nauc_recall_at_5_diff1 value: 42.807724293386535 - type: nauc_recall_at_5_max value: 37.30900415179882 - type: nauc_recall_at_5_std value: 24.397004028434065 - type: ndcg_at_1 value: 30.255 - type: ndcg_at_10 value: 27.319 - 
type: ndcg_at_100 value: 31.384 - type: ndcg_at_1000 value: 33.976 - type: ndcg_at_20 value: 28.811999999999998 - type: ndcg_at_3 value: 23.937 - type: ndcg_at_5 value: 25.665 - type: precision_at_1 value: 30.255 - type: precision_at_10 value: 6.049 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.128 - type: precision_at_20 value: 3.5060000000000002 - type: precision_at_3 value: 15.097 - type: precision_at_5 value: 10.427999999999999 - type: recall_at_1 value: 15.128 - type: recall_at_10 value: 30.246000000000002 - type: recall_at_100 value: 46.549 - type: recall_at_1000 value: 63.943000000000005 - type: recall_at_20 value: 35.056 - type: recall_at_3 value: 22.645 - type: recall_at_5 value: 26.069 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA (default) revision: ab518f4d6fcca38d87c25209f94beba119d02014 split: test type: mteb/hotpotqa metrics: - type: main_score value: 25.594 - type: map_at_1 value: 13.328999999999999 - type: map_at_10 value: 19.622999999999998 - type: map_at_100 value: 20.443 - type: map_at_1000 value: 20.547 - type: map_at_20 value: 20.061 - type: map_at_3 value: 17.738 - type: map_at_5 value: 18.837 - type: mrr_at_1 value: 26.65766374071573 - type: mrr_at_10 value: 33.638001993505 - type: mrr_at_100 value: 34.36519863954255 - type: mrr_at_1000 value: 34.43147960519581 - type: mrr_at_20 value: 34.05666557215429 - type: mrr_at_3 value: 31.672293495385954 - type: mrr_at_5 value: 32.81003826243525 - type: nauc_map_at_1000_diff1 value: 42.92564602155638 - type: nauc_map_at_1000_max value: 30.93107038343275 - type: nauc_map_at_1000_std value: 20.940662128255337 - type: nauc_map_at_100_diff1 value: 42.936496253513425 - type: nauc_map_at_100_max value: 30.91180851399884 - type: nauc_map_at_100_std value: 20.868468122916305 - type: nauc_map_at_10_diff1 value: 43.187610636795725 - type: nauc_map_at_10_max value: 30.845550958099743 - type: nauc_map_at_10_std value: 19.982030218251648 - type: nauc_map_at_1_diff1 value: 52.309034270608876 - type: nauc_map_at_1_max value: 31.709473219405616 - type: nauc_map_at_1_std value: 14.43851822909325 - type: nauc_map_at_20_diff1 value: 43.03978703190173 - type: nauc_map_at_20_max value: 30.86942775161063 - type: nauc_map_at_20_std value: 20.45899665222483 - type: nauc_map_at_3_diff1 value: 45.41524124545876 - type: nauc_map_at_3_max value: 31.643858919734857 - type: nauc_map_at_3_std value: 18.38563207630924 - type: nauc_map_at_5_diff1 value: 43.901640168008235 - type: nauc_map_at_5_max value: 30.922718235695463 - type: nauc_map_at_5_std value: 19.282329560614524 - type: nauc_mrr_at_1000_diff1 value: 46.58287148422368 - type: nauc_mrr_at_1000_max value: 31.253308486580433 - type: nauc_mrr_at_1000_std value: 17.55539836580872 - type: nauc_mrr_at_100_diff1 value: 46.573742959015114 - type: nauc_mrr_at_100_max value: 31.24499189752598 - type: nauc_mrr_at_100_std value: 17.552024988025664 - type: nauc_mrr_at_10_diff1 value: 46.66525086279671 - type: nauc_mrr_at_10_max value: 31.230319076841546 - type: nauc_mrr_at_10_std value: 17.165865128607045 - type: nauc_mrr_at_1_diff1 value: 52.309034270608876 - type: nauc_mrr_at_1_max value: 31.709473219405616 - type: nauc_mrr_at_1_std value: 14.43851822909325 - type: nauc_mrr_at_20_diff1 value: 46.588047522930275 - type: nauc_mrr_at_20_max value: 31.22329428012019 - type: nauc_mrr_at_20_std value: 17.402623764852763 - type: nauc_mrr_at_3_diff1 value: 47.616857174725034 - type: nauc_mrr_at_3_max value: 31.683285399772792 - type: nauc_mrr_at_3_std value: 
16.51070486195255 - type: nauc_mrr_at_5_diff1 value: 46.94600877034942 - type: nauc_mrr_at_5_max value: 31.373228910578714 - type: nauc_mrr_at_5_std value: 16.878955713897653 - type: nauc_ndcg_at_1000_diff1 value: 40.78231860794336 - type: nauc_ndcg_at_1000_max value: 31.269823988995526 - type: nauc_ndcg_at_1000_std value: 25.281682702756076 - type: nauc_ndcg_at_100_diff1 value: 40.85167326240917 - type: nauc_ndcg_at_100_max value: 30.7489966502373 - type: nauc_ndcg_at_100_std value: 24.034633639362347 - type: nauc_ndcg_at_10_diff1 value: 41.64071111332904 - type: nauc_ndcg_at_10_max value: 30.37691893443363 - type: nauc_ndcg_at_10_std value: 20.727400392560106 - type: nauc_ndcg_at_1_diff1 value: 52.309034270608876 - type: nauc_ndcg_at_1_max value: 31.709473219405616 - type: nauc_ndcg_at_1_std value: 14.43851822909325 - type: nauc_ndcg_at_20_diff1 value: 41.20606946396262 - type: nauc_ndcg_at_20_max value: 30.427872627999093 - type: nauc_ndcg_at_20_std value: 21.967337950787567 - type: nauc_ndcg_at_3_diff1 value: 44.769112514565826 - type: nauc_ndcg_at_3_max value: 31.587253638273616 - type: nauc_ndcg_at_3_std value: 18.44337537736502 - type: nauc_ndcg_at_5_diff1 value: 42.843534154265036 - type: nauc_ndcg_at_5_max value: 30.656690565149763 - type: nauc_ndcg_at_5_std value: 19.55710266805375 - type: nauc_precision_at_1000_diff1 value: 20.571918733919397 - type: nauc_precision_at_1000_max value: 25.92113127229473 - type: nauc_precision_at_1000_std value: 39.87020670433835 - type: nauc_precision_at_100_diff1 value: 25.698473178831794 - type: nauc_precision_at_100_max value: 25.352318812011827 - type: nauc_precision_at_100_std value: 32.846058298512595 - type: nauc_precision_at_10_diff1 value: 31.870926938946866 - type: nauc_precision_at_10_max value: 26.926394860214852 - type: nauc_precision_at_10_std value: 24.246520824850055 - type: nauc_precision_at_1_diff1 value: 52.309034270608876 - type: nauc_precision_at_1_max value: 31.709473219405616 - type: nauc_precision_at_1_std value: 14.43851822909325 - type: nauc_precision_at_20_diff1 value: 30.028917020847068 - type: nauc_precision_at_20_max value: 26.597814109731576 - type: nauc_precision_at_20_std value: 27.346420398015525 - type: nauc_precision_at_3_diff1 value: 40.29446945365145 - type: nauc_precision_at_3_max value: 31.134170069020573 - type: nauc_precision_at_3_std value: 20.590305745326894 - type: nauc_precision_at_5_diff1 value: 35.844201172387955 - type: nauc_precision_at_5_max value: 28.624481394118277 - type: nauc_precision_at_5_std value: 22.316991114813813 - type: nauc_recall_at_1000_diff1 value: 20.571918733919446 - type: nauc_recall_at_1000_max value: 25.92113127229482 - type: nauc_recall_at_1000_std value: 39.87020670433847 - type: nauc_recall_at_100_diff1 value: 25.698473178831783 - type: nauc_recall_at_100_max value: 25.352318812011802 - type: nauc_recall_at_100_std value: 32.84605829851259 - type: nauc_recall_at_10_diff1 value: 31.870926938946916 - type: nauc_recall_at_10_max value: 26.92639486021487 - type: nauc_recall_at_10_std value: 24.246520824850094 - type: nauc_recall_at_1_diff1 value: 52.309034270608876 - type: nauc_recall_at_1_max value: 31.709473219405616 - type: nauc_recall_at_1_std value: 14.43851822909325 - type: nauc_recall_at_20_diff1 value: 30.028917020847096 - type: nauc_recall_at_20_max value: 26.597814109731615 - type: nauc_recall_at_20_std value: 27.346420398015525 - type: nauc_recall_at_3_diff1 value: 40.294469453651374 - type: nauc_recall_at_3_max value: 31.134170069020577 - type: nauc_recall_at_3_std 
value: 20.590305745326855 - type: nauc_recall_at_5_diff1 value: 35.844201172387926 - type: nauc_recall_at_5_max value: 28.624481394118273 - type: nauc_recall_at_5_std value: 22.31699111481382 - type: ndcg_at_1 value: 26.657999999999998 - type: ndcg_at_10 value: 25.594 - type: ndcg_at_100 value: 29.554000000000002 - type: ndcg_at_1000 value: 32.223 - type: ndcg_at_20 value: 27.044 - type: ndcg_at_3 value: 21.976000000000003 - type: ndcg_at_5 value: 23.813000000000002 - type: precision_at_1 value: 26.657999999999998 - type: precision_at_10 value: 5.864 - type: precision_at_100 value: 0.905 - type: precision_at_1000 value: 0.126 - type: precision_at_20 value: 3.4000000000000004 - type: precision_at_3 value: 14.135 - type: precision_at_5 value: 9.931 - type: recall_at_1 value: 13.328999999999999 - type: recall_at_10 value: 29.317999999999998 - type: recall_at_100 value: 45.253 - type: recall_at_1000 value: 63.187000000000005 - type: recall_at_20 value: 33.997 - type: recall_at_3 value: 21.201999999999998 - type: recall_at_5 value: 24.828 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA (default) revision: ab518f4d6fcca38d87c25209f94beba119d02014 split: train type: mteb/hotpotqa metrics: - type: main_score value: 27.361 - type: map_at_1 value: 15.187999999999999 - type: map_at_10 value: 21.143 - type: map_at_100 value: 21.955 - type: map_at_1000 value: 22.055 - type: map_at_20 value: 21.571 - type: map_at_3 value: 19.337 - type: map_at_5 value: 20.328 - type: mrr_at_1 value: 30.375294117647062 - type: mrr_at_10 value: 36.89110037348242 - type: mrr_at_100 value: 37.59161854635361 - type: mrr_at_1000 value: 37.65278972243383 - type: mrr_at_20 value: 37.28204270783653 - type: mrr_at_3 value: 35.073137254900395 - type: mrr_at_5 value: 36.10778431372484 - type: nauc_map_at_1000_diff1 value: 49.349859863214526 - type: nauc_map_at_1000_max value: 41.88467513574528 - type: nauc_map_at_1000_std value: 24.22933543144338 - type: nauc_map_at_100_diff1 value: 49.35940939023821 - type: nauc_map_at_100_max value: 41.862590382078025 - type: nauc_map_at_100_std value: 24.15434461176854 - type: nauc_map_at_10_diff1 value: 49.8538165767892 - type: nauc_map_at_10_max value: 42.07384809374906 - type: nauc_map_at_10_std value: 23.416303368507712 - type: nauc_map_at_1_diff1 value: 59.92575902483264 - type: nauc_map_at_1_max value: 45.47700185129611 - type: nauc_map_at_1_std value: 19.83388552453144 - type: nauc_map_at_20_diff1 value: 49.52738828169819 - type: nauc_map_at_20_max value: 41.91046133599251 - type: nauc_map_at_20_std value: 23.79441500917769 - type: nauc_map_at_3_diff1 value: 51.790844119427824 - type: nauc_map_at_3_max value: 42.999352173314854 - type: nauc_map_at_3_std value: 22.008776106004134 - type: nauc_map_at_5_diff1 value: 50.670375343301934 - type: nauc_map_at_5_max value: 42.55434252366988 - type: nauc_map_at_5_std value: 22.82261757618132 - type: nauc_mrr_at_1000_diff1 value: 54.67988721868572 - type: nauc_mrr_at_1000_max value: 43.91336195991437 - type: nauc_mrr_at_1000_std value: 22.158081050617664 - type: nauc_mrr_at_100_diff1 value: 54.66673307398132 - type: nauc_mrr_at_100_max value: 43.90104349501937 - type: nauc_mrr_at_100_std value: 22.149879738198813 - type: nauc_mrr_at_10_diff1 value: 54.80299708197367 - type: nauc_mrr_at_10_max value: 43.97302396926416 - type: nauc_mrr_at_10_std value: 21.93101088607793 - type: nauc_mrr_at_1_diff1 value: 59.92575902483264 - type: nauc_mrr_at_1_max value: 45.47700185129611 - type: nauc_mrr_at_1_std value: 19.83388552453144 - type: 
nauc_mrr_at_20_diff1 value: 54.69703296749322 - type: nauc_mrr_at_20_max value: 43.91191360382579 - type: nauc_mrr_at_20_std value: 22.0679377608011 - type: nauc_mrr_at_3_diff1 value: 55.6492096640659 - type: nauc_mrr_at_3_max value: 44.39253185930357 - type: nauc_mrr_at_3_std value: 21.23559827715352 - type: nauc_mrr_at_5_diff1 value: 55.15718817723955 - type: nauc_mrr_at_5_max value: 44.21359056116808 - type: nauc_mrr_at_5_std value: 21.689943072561064 - type: nauc_ndcg_at_1000_diff1 value: 47.08149592464777 - type: nauc_ndcg_at_1000_max value: 41.4824024851649 - type: nauc_ndcg_at_1000_std value: 28.290841883563324 - type: nauc_ndcg_at_100_diff1 value: 47.203241306354165 - type: nauc_ndcg_at_100_max value: 41.01082594665584 - type: nauc_ndcg_at_100_std value: 27.01050126741052 - type: nauc_ndcg_at_10_diff1 value: 48.8574562673927 - type: nauc_ndcg_at_10_max value: 41.59700363422577 - type: nauc_ndcg_at_10_std value: 24.302113156303708 - type: nauc_ndcg_at_1_diff1 value: 59.92575902483264 - type: nauc_ndcg_at_1_max value: 45.47700185129611 - type: nauc_ndcg_at_1_std value: 19.83388552453144 - type: nauc_ndcg_at_20_diff1 value: 48.01876180994126 - type: nauc_ndcg_at_20_max value: 41.16103683691111 - type: nauc_ndcg_at_20_std value: 25.22714091905253 - type: nauc_ndcg_at_3_diff1 value: 51.76534091516943 - type: nauc_ndcg_at_3_max value: 43.066268249034806 - type: nauc_ndcg_at_3_std value: 22.189204956774773 - type: nauc_ndcg_at_5_diff1 value: 50.274763573284176 - type: nauc_ndcg_at_5_max value: 42.46612336859775 - type: nauc_ndcg_at_5_std value: 23.298594291859136 - type: nauc_precision_at_1000_diff1 value: 21.060533862186702 - type: nauc_precision_at_1000_max value: 28.16962240505211 - type: nauc_precision_at_1000_std value: 39.87841510942972 - type: nauc_precision_at_100_diff1 value: 27.25075569238348 - type: nauc_precision_at_100_max value: 29.229681139793666 - type: nauc_precision_at_100_std value: 33.06692753126747 - type: nauc_precision_at_10_diff1 value: 37.35908709421307 - type: nauc_precision_at_10_max value: 35.09768811571614 - type: nauc_precision_at_10_std value: 26.319669286806963 - type: nauc_precision_at_1_diff1 value: 59.92575902483264 - type: nauc_precision_at_1_max value: 45.47700185129611 - type: nauc_precision_at_1_std value: 19.83388552453144 - type: nauc_precision_at_20_diff1 value: 34.02026815173258 - type: nauc_precision_at_20_max value: 32.97155795749605 - type: nauc_precision_at_20_std value: 28.446608120331838 - type: nauc_precision_at_3_diff1 value: 46.625368812766844 - type: nauc_precision_at_3_max value: 41.143493529756945 - type: nauc_precision_at_3_std value: 23.34446397266084 - type: nauc_precision_at_5_diff1 value: 42.596622234501666 - type: nauc_precision_at_5_max value: 38.97638871265979 - type: nauc_precision_at_5_std value: 25.081525286284535 - type: nauc_recall_at_1000_diff1 value: 21.060533862186706 - type: nauc_recall_at_1000_max value: 28.16962240505217 - type: nauc_recall_at_1000_std value: 39.878415109429795 - type: nauc_recall_at_100_diff1 value: 27.250755692383482 - type: nauc_recall_at_100_max value: 29.229681139793694 - type: nauc_recall_at_100_std value: 33.066927531267474 - type: nauc_recall_at_10_diff1 value: 37.35908709421306 - type: nauc_recall_at_10_max value: 35.09768811571614 - type: nauc_recall_at_10_std value: 26.319669286807006 - type: nauc_recall_at_1_diff1 value: 59.92575902483264 - type: nauc_recall_at_1_max value: 45.47700185129611 - type: nauc_recall_at_1_std value: 19.83388552453144 - type: nauc_recall_at_20_diff1 value: 
34.02026815173259 - type: nauc_recall_at_20_max value: 32.97155795749605 - type: nauc_recall_at_20_std value: 28.446608120331906 - type: nauc_recall_at_3_diff1 value: 46.62536881276683 - type: nauc_recall_at_3_max value: 41.14349352975691 - type: nauc_recall_at_3_std value: 23.344463972660805 - type: nauc_recall_at_5_diff1 value: 42.59662223450168 - type: nauc_recall_at_5_max value: 38.97638871265983 - type: nauc_recall_at_5_std value: 25.081525286284567 - type: ndcg_at_1 value: 30.375000000000004 - type: ndcg_at_10 value: 27.361 - type: ndcg_at_100 value: 31.247000000000003 - type: ndcg_at_1000 value: 33.785 - type: ndcg_at_20 value: 28.755999999999997 - type: ndcg_at_3 value: 23.905 - type: ndcg_at_5 value: 25.570999999999998 - type: precision_at_1 value: 30.375000000000004 - type: precision_at_10 value: 6.072 - type: precision_at_100 value: 0.9199999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_20 value: 3.486 - type: precision_at_3 value: 15.033 - type: precision_at_5 value: 10.34 - type: recall_at_1 value: 15.187999999999999 - type: recall_at_10 value: 30.359 - type: recall_at_100 value: 45.985 - type: recall_at_1000 value: 63.001 - type: recall_at_20 value: 34.861 - type: recall_at_3 value: 22.55 - type: recall_at_5 value: 25.851000000000003 task: type: Retrieval - dataset: config: default name: MTEB MSMARCO (default) revision: c5a29a104738b98a9e76336939199e264163d4a0 split: dev type: mteb/msmarco metrics: - type: main_score value: 34.803 - type: map_at_1 value: 18.05 - type: map_at_10 value: 28.698 - type: map_at_100 value: 29.87 - type: map_at_1000 value: 29.937 - type: map_at_20 value: 29.407 - type: map_at_3 value: 25.211 - type: map_at_5 value: 27.256999999999998 - type: mrr_at_1 value: 18.595988538681947 - type: mrr_at_10 value: 29.313929822167506 - type: mrr_at_100 value: 30.43456132167563 - type: mrr_at_1000 value: 30.496264584797384 - type: mrr_at_20 value: 29.994260130208367 - type: mrr_at_3 value: 25.847659980897618 - type: mrr_at_5 value: 27.90926456542496 - type: nauc_map_at_1000_diff1 value: 30.967805122512292 - type: nauc_map_at_1000_max value: 9.508356994054424 - type: nauc_map_at_1000_std value: -16.7647780708821 - type: nauc_map_at_100_diff1 value: 30.959590208362474 - type: nauc_map_at_100_max value: 9.519514909485395 - type: nauc_map_at_100_std value: -16.752225883753628 - type: nauc_map_at_10_diff1 value: 30.961638427898 - type: nauc_map_at_10_max value: 9.032394524880466 - type: nauc_map_at_10_std value: -17.58469702660575 - type: nauc_map_at_1_diff1 value: 36.68684868509966 - type: nauc_map_at_1_max value: 8.006788300165026 - type: nauc_map_at_1_std value: -16.453929215948566 - type: nauc_map_at_20_diff1 value: 30.914160804266828 - type: nauc_map_at_20_max value: 9.383047052491237 - type: nauc_map_at_20_std value: -17.09908974160602 - type: nauc_map_at_3_diff1 value: 32.00864052480307 - type: nauc_map_at_3_max value: 7.9346719904912755 - type: nauc_map_at_3_std value: -18.363334919131873 - type: nauc_map_at_5_diff1 value: 31.30004793090547 - type: nauc_map_at_5_max value: 8.32231303754107 - type: nauc_map_at_5_std value: -18.244863888195106 - type: nauc_mrr_at_1000_diff1 value: 30.925580995966435 - type: nauc_mrr_at_1000_max value: 9.570609811089142 - type: nauc_mrr_at_1000_std value: -16.416887518960856 - type: nauc_mrr_at_100_diff1 value: 30.914394357734505 - type: nauc_mrr_at_100_max value: 9.586927921368241 - type: nauc_mrr_at_100_std value: -16.39712616782138 - type: nauc_mrr_at_10_diff1 value: 30.92742800307562 - type: 
nauc_mrr_at_10_max value: 9.197220826362587 - type: nauc_mrr_at_10_std value: -17.151747008284875 - type: nauc_mrr_at_1_diff1 value: 36.60106731450987 - type: nauc_mrr_at_1_max value: 8.037370217799388 - type: nauc_mrr_at_1_std value: -16.29920870158755 - type: nauc_mrr_at_20_diff1 value: 30.873870247447528 - type: nauc_mrr_at_20_max value: 9.489994351963741 - type: nauc_mrr_at_20_std value: -16.695446595293273 - type: nauc_mrr_at_3_diff1 value: 31.962676684893193 - type: nauc_mrr_at_3_max value: 7.902980731667526 - type: nauc_mrr_at_3_std value: -18.087972700882023 - type: nauc_mrr_at_5_diff1 value: 31.266029079987334 - type: nauc_mrr_at_5_max value: 8.43829728480481 - type: nauc_mrr_at_5_std value: -17.830313089048254 - type: nauc_ndcg_at_1000_diff1 value: 28.800055526073052 - type: nauc_ndcg_at_1000_max value: 12.150167860614443 - type: nauc_ndcg_at_1000_std value: -13.25905395516576 - type: nauc_ndcg_at_100_diff1 value: 28.55273062021815 - type: nauc_ndcg_at_100_max value: 12.604290774389284 - type: nauc_ndcg_at_100_std value: -12.640286042753369 - type: nauc_ndcg_at_10_diff1 value: 28.656702169882305 - type: nauc_ndcg_at_10_max value: 10.390975552193304 - type: nauc_ndcg_at_10_std value: -16.979914789104274 - type: nauc_ndcg_at_1_diff1 value: 36.60106731450987 - type: nauc_ndcg_at_1_max value: 7.999297342297694 - type: nauc_ndcg_at_1_std value: -16.37844144249649 - type: nauc_ndcg_at_20_diff1 value: 28.39512508390008 - type: nauc_ndcg_at_20_max value: 11.670038381067469 - type: nauc_ndcg_at_20_std value: -15.19270139716777 - type: nauc_ndcg_at_3_diff1 value: 30.720892075394424 - type: nauc_ndcg_at_3_max value: 7.944044047190646 - type: nauc_ndcg_at_3_std value: -18.776183934337002 - type: nauc_ndcg_at_5_diff1 value: 29.499593527403515 - type: nauc_ndcg_at_5_max value: 8.70632138099665 - type: nauc_ndcg_at_5_std value: -18.516401671537196 - type: nauc_precision_at_1000_diff1 value: -0.8237646361863551 - type: nauc_precision_at_1000_max value: 29.65949086980757 - type: nauc_precision_at_1000_std value: 27.545476721494193 - type: nauc_precision_at_100_diff1 value: 12.284529508091742 - type: nauc_precision_at_100_max value: 29.413232419844444 - type: nauc_precision_at_100_std value: 17.196307006886304 - type: nauc_precision_at_10_diff1 value: 21.3879709585264 - type: nauc_precision_at_10_max value: 14.806399276392005 - type: nauc_precision_at_10_std value: -13.89189147461678 - type: nauc_precision_at_1_diff1 value: 36.60106731450987 - type: nauc_precision_at_1_max value: 7.999297342297694 - type: nauc_precision_at_1_std value: -16.37844144249649 - type: nauc_precision_at_20_diff1 value: 18.793745272557334 - type: nauc_precision_at_20_max value: 20.52958902353019 - type: nauc_precision_at_20_std value: -5.831301138585665 - type: nauc_precision_at_3_diff1 value: 27.51302998292356 - type: nauc_precision_at_3_max value: 8.124801342511773 - type: nauc_precision_at_3_std value: -19.653828947390405 - type: nauc_precision_at_5_diff1 value: 24.648428234894187 - type: nauc_precision_at_5_max value: 9.970901593970794 - type: nauc_precision_at_5_std value: -18.870802823554662 - type: nauc_recall_at_1000_diff1 value: 3.1535887573039503 - type: nauc_recall_at_1000_max value: 49.483153585320665 - type: nauc_recall_at_1000_std value: 48.068469198499955 - type: nauc_recall_at_100_diff1 value: 16.09114265175475 - type: nauc_recall_at_100_max value: 31.63592256685488 - type: nauc_recall_at_100_std value: 17.262073141805402 - type: nauc_recall_at_10_diff1 value: 21.74285904516543 - type: 
nauc_recall_at_10_max value: 14.233113457215413 - type: nauc_recall_at_10_std value: -15.00141123225369 - type: nauc_recall_at_1_diff1 value: 36.68684868509966 - type: nauc_recall_at_1_max value: 8.006788300165026 - type: nauc_recall_at_1_std value: -16.453929215948566 - type: nauc_recall_at_20_diff1 value: 19.721186534513574 - type: nauc_recall_at_20_max value: 19.839943733839586 - type: nauc_recall_at_20_std value: -7.528794529976447 - type: nauc_recall_at_3_diff1 value: 27.150549688069308 - type: nauc_recall_at_3_max value: 7.915443441945287 - type: nauc_recall_at_3_std value: -19.87191884581972 - type: nauc_recall_at_5_diff1 value: 24.49887160612282 - type: nauc_recall_at_5_max value: 9.52108917622074 - type: nauc_recall_at_5_std value: -19.231477971502706 - type: ndcg_at_1 value: 18.596 - type: ndcg_at_10 value: 34.803 - type: ndcg_at_100 value: 40.544000000000004 - type: ndcg_at_1000 value: 42.260999999999996 - type: ndcg_at_20 value: 37.32 - type: ndcg_at_3 value: 27.71 - type: ndcg_at_5 value: 31.385999999999996 - type: precision_at_1 value: 18.596 - type: precision_at_10 value: 5.595 - type: precision_at_100 value: 0.848 - type: precision_at_1000 value: 0.099 - type: precision_at_20 value: 3.32 - type: precision_at_3 value: 11.858 - type: precision_at_5 value: 8.988999999999999 - type: recall_at_1 value: 18.05 - type: recall_at_10 value: 53.516 - type: recall_at_100 value: 80.289 - type: recall_at_1000 value: 93.512 - type: recall_at_20 value: 63.288999999999994 - type: recall_at_3 value: 34.306 - type: recall_at_5 value: 43.145 task: type: Retrieval - dataset: config: default name: MTEB MSMARCO (default) revision: c5a29a104738b98a9e76336939199e264163d4a0 split: test type: mteb/msmarco metrics: - type: main_score value: 59.06400000000001 - type: map_at_1 value: 1.69 - type: map_at_10 value: 10.755 - type: map_at_100 value: 28.62 - type: map_at_1000 value: 34.429 - type: map_at_20 value: 16.674 - type: map_at_3 value: 4.005 - type: map_at_5 value: 6.332 - type: mrr_at_1 value: 88.37209302325581 - type: mrr_at_10 value: 91.47286821705424 - type: mrr_at_100 value: 91.56261235995456 - type: mrr_at_1000 value: 91.56261235995456 - type: mrr_at_20 value: 91.47286821705424 - type: mrr_at_3 value: 91.47286821705424 - type: mrr_at_5 value: 91.47286821705424 - type: nauc_map_at_1000_diff1 value: -58.004048824154644 - type: nauc_map_at_1000_max value: 34.44700621046442 - type: nauc_map_at_1000_std value: 66.90595473840271 - type: nauc_map_at_100_diff1 value: -45.770189406894886 - type: nauc_map_at_100_max value: 26.211654802825322 - type: nauc_map_at_100_std value: 56.080139344538956 - type: nauc_map_at_10_diff1 value: -10.97871068331086 - type: nauc_map_at_10_max value: 28.539368064747517 - type: nauc_map_at_10_std value: 21.028635665204536 - type: nauc_map_at_1_diff1 value: 35.97114363073452 - type: nauc_map_at_1_max value: -10.574909183137414 - type: nauc_map_at_1_std value: -16.11433179567385 - type: nauc_map_at_20_diff1 value: -17.439484208262535 - type: nauc_map_at_20_max value: 25.1778779774871 - type: nauc_map_at_20_std value: 28.832377812578258 - type: nauc_map_at_3_diff1 value: 7.501651006902326 - type: nauc_map_at_3_max value: 19.130202694438417 - type: nauc_map_at_3_std value: 6.940777942691115 - type: nauc_map_at_5_diff1 value: -1.9207947577929207 - type: nauc_map_at_5_max value: 24.72479488530524 - type: nauc_map_at_5_std value: 12.662402148436808 - type: nauc_mrr_at_1000_diff1 value: -31.38940176778026 - type: nauc_mrr_at_1000_max value: 27.91929945678086 - type: 
nauc_mrr_at_1000_std value: 63.98020520396501 - type: nauc_mrr_at_100_diff1 value: -31.38940176778026 - type: nauc_mrr_at_100_max value: 27.91929945678086 - type: nauc_mrr_at_100_std value: 63.98020520396501 - type: nauc_mrr_at_10_diff1 value: -31.98154638739888 - type: nauc_mrr_at_10_max value: 28.691575876582025 - type: nauc_mrr_at_10_std value: 64.15203937521413 - type: nauc_mrr_at_1_diff1 value: -18.33986985937143 - type: nauc_mrr_at_1_max value: -3.5658506202108935 - type: nauc_mrr_at_1_std value: 57.265329172858316 - type: nauc_mrr_at_20_diff1 value: -31.98154638739888 - type: nauc_mrr_at_20_max value: 28.691575876582025 - type: nauc_mrr_at_20_std value: 64.15203937521413 - type: nauc_mrr_at_3_diff1 value: -31.98154638739888 - type: nauc_mrr_at_3_max value: 28.691575876582025 - type: nauc_mrr_at_3_std value: 64.15203937521413 - type: nauc_mrr_at_5_diff1 value: -31.98154638739888 - type: nauc_mrr_at_5_max value: 28.691575876582025 - type: nauc_mrr_at_5_std value: 64.15203937521413 - type: nauc_ndcg_at_1000_diff1 value: -64.4009360298104 - type: nauc_ndcg_at_1000_max value: 47.73173186354194 - type: nauc_ndcg_at_1000_std value: 67.76224269862708 - type: nauc_ndcg_at_100_diff1 value: -55.81170448144629 - type: nauc_ndcg_at_100_max value: 45.76842611746515 - type: nauc_ndcg_at_100_std value: 65.75760535333559 - type: nauc_ndcg_at_10_diff1 value: -48.26548478088951 - type: nauc_ndcg_at_10_max value: 47.221554778624736 - type: nauc_ndcg_at_10_std value: 48.66818342066006 - type: nauc_ndcg_at_1_diff1 value: 22.361573521914167 - type: nauc_ndcg_at_1_max value: -4.095691143820581 - type: nauc_ndcg_at_1_std value: -22.86231879814137 - type: nauc_ndcg_at_20_diff1 value: -52.771674683458514 - type: nauc_ndcg_at_20_max value: 39.94057642569563 - type: nauc_ndcg_at_20_std value: 55.44432532491158 - type: nauc_ndcg_at_3_diff1 value: -15.072453465017949 - type: nauc_ndcg_at_3_max value: 36.22918826732502 - type: nauc_ndcg_at_3_std value: 16.823145415489098 - type: nauc_ndcg_at_5_diff1 value: -32.572693237589036 - type: nauc_ndcg_at_5_max value: 44.30557738022862 - type: nauc_ndcg_at_5_std value: 30.75861792737064 - type: nauc_precision_at_1000_diff1 value: -61.90365106233373 - type: nauc_precision_at_1000_max value: 29.37450084944317 - type: nauc_precision_at_1000_std value: 51.15294977255612 - type: nauc_precision_at_100_diff1 value: -64.87811325128291 - type: nauc_precision_at_100_max value: 27.082351018049334 - type: nauc_precision_at_100_std value: 59.21347841222029 - type: nauc_precision_at_10_diff1 value: -73.17039435952445 - type: nauc_precision_at_10_max value: 50.03963485425603 - type: nauc_precision_at_10_std value: 74.7760482349427 - type: nauc_precision_at_1_diff1 value: -18.33986985937143 - type: nauc_precision_at_1_max value: -3.5658506202108935 - type: nauc_precision_at_1_std value: 57.265329172858316 - type: nauc_precision_at_20_diff1 value: -69.19781045685487 - type: nauc_precision_at_20_max value: 35.34563871429891 - type: nauc_precision_at_20_std value: 69.73611685460025 - type: nauc_precision_at_3_diff1 value: -65.76154675431599 - type: nauc_precision_at_3_max value: 58.774704910679354 - type: nauc_precision_at_3_std value: 81.26148034443878 - type: nauc_precision_at_5_diff1 value: -78.31465612075088 - type: nauc_precision_at_5_max value: 59.02550474452744 - type: nauc_precision_at_5_std value: 83.37770095414346 - type: nauc_recall_at_1000_diff1 value: -71.70183001436035 - type: nauc_recall_at_1000_max value: 39.784139396042576 - type: nauc_recall_at_1000_std value: 
69.52333709786 - type: nauc_recall_at_100_diff1 value: -39.867533213723476 - type: nauc_recall_at_100_max value: 25.88219867077632 - type: nauc_recall_at_100_std value: 50.413431837035105 - type: nauc_recall_at_10_diff1 value: -4.726447649175431 - type: nauc_recall_at_10_max value: 27.630825828715384 - type: nauc_recall_at_10_std value: 14.42376069507755 - type: nauc_recall_at_1_diff1 value: 35.97114363073452 - type: nauc_recall_at_1_max value: -10.574909183137414 - type: nauc_recall_at_1_std value: -16.11433179567385 - type: nauc_recall_at_20_diff1 value: -8.787671293237674 - type: nauc_recall_at_20_max value: 25.207984756226242 - type: nauc_recall_at_20_std value: 20.728749504224318 - type: nauc_recall_at_3_diff1 value: 9.441972498951067 - type: nauc_recall_at_3_max value: 22.118880183280638 - type: nauc_recall_at_3_std value: 4.297339270917102 - type: nauc_recall_at_5_diff1 value: 4.072468434627269 - type: nauc_recall_at_5_max value: 24.12093962060067 - type: nauc_recall_at_5_std value: 6.723457577572802 - type: ndcg_at_1 value: 68.217 - type: ndcg_at_10 value: 59.06400000000001 - type: ndcg_at_100 value: 51.28 - type: ndcg_at_1000 value: 57.745999999999995 - type: ndcg_at_20 value: 55.969 - type: ndcg_at_3 value: 63.207 - type: ndcg_at_5 value: 62.099000000000004 - type: precision_at_1 value: 88.372 - type: precision_at_10 value: 68.372 - type: precision_at_100 value: 31.349 - type: precision_at_1000 value: 5.858 - type: precision_at_20 value: 59.767 - type: precision_at_3 value: 77.519 - type: precision_at_5 value: 75.81400000000001 - type: recall_at_1 value: 1.69 - type: recall_at_10 value: 11.779 - type: recall_at_100 value: 38.83 - type: recall_at_1000 value: 62.572 - type: recall_at_20 value: 19.155 - type: recall_at_3 value: 4.143 - type: recall_at_5 value: 6.801 task: type: Retrieval - dataset: config: default name: MTEB MSMARCO (default) revision: c5a29a104738b98a9e76336939199e264163d4a0 split: train type: mteb/msmarco metrics: - type: main_score value: 33.715 - type: map_at_1 value: 16.719 - type: map_at_10 value: 27.433000000000003 - type: map_at_100 value: 28.703 - type: map_at_1000 value: 28.767 - type: map_at_20 value: 28.191 - type: map_at_3 value: 23.826 - type: map_at_5 value: 25.849 - type: mrr_at_1 value: 17.20904523212557 - type: mrr_at_10 value: 27.92367929563414 - type: mrr_at_100 value: 29.152633780765758 - type: mrr_at_1000 value: 29.21099712739743 - type: mrr_at_20 value: 28.66210180739731 - type: mrr_at_3 value: 24.356466026020737 - type: mrr_at_5 value: 26.365397526652018 - type: nauc_map_at_1000_diff1 value: 29.63353838791719 - type: nauc_map_at_1000_max value: 5.7454487966009555 - type: nauc_map_at_1000_std value: -18.063069395275722 - type: nauc_map_at_100_diff1 value: 29.616549284785748 - type: nauc_map_at_100_max value: 5.760919702571223 - type: nauc_map_at_100_std value: -18.044574507278362 - type: nauc_map_at_10_diff1 value: 29.700954720537165 - type: nauc_map_at_10_max value: 5.424404826187661 - type: nauc_map_at_10_std value: -18.842507893783537 - type: nauc_map_at_1_diff1 value: 35.211396613278865 - type: nauc_map_at_1_max value: 3.1343744942229086 - type: nauc_map_at_1_std value: -18.170256518161427 - type: nauc_map_at_20_diff1 value: 29.613406086113486 - type: nauc_map_at_20_max value: 5.641627873928245 - type: nauc_map_at_20_std value: -18.396499430312748 - type: nauc_map_at_3_diff1 value: 30.603238239884483 - type: nauc_map_at_3_max value: 4.453242887818738 - type: nauc_map_at_3_std value: -19.268024232660473 - type: nauc_map_at_5_diff1 value: 
30.003625205143397 - type: nauc_map_at_5_max value: 4.969401253866233 - type: nauc_map_at_5_std value: -19.276106426526184 - type: nauc_mrr_at_1000_diff1 value: 29.486572453576716 - type: nauc_mrr_at_1000_max value: 5.693116182861802 - type: nauc_mrr_at_1000_std value: -17.917227560489565 - type: nauc_mrr_at_100_diff1 value: 29.468797895964215 - type: nauc_mrr_at_100_max value: 5.710921844344999 - type: nauc_mrr_at_100_std value: -17.89511910086844 - type: nauc_mrr_at_10_diff1 value: 29.53746647566686 - type: nauc_mrr_at_10_max value: 5.406885859896506 - type: nauc_mrr_at_10_std value: -18.644478337030773 - type: nauc_mrr_at_1_diff1 value: 34.91427458267835 - type: nauc_mrr_at_1_max value: 3.2309227517846852 - type: nauc_mrr_at_1_std value: -18.080446645652778 - type: nauc_mrr_at_20_diff1 value: 29.45773578143679 - type: nauc_mrr_at_20_max value: 5.60850730413169 - type: nauc_mrr_at_20_std value: -18.21575909250342 - type: nauc_mrr_at_3_diff1 value: 30.40915710544333 - type: nauc_mrr_at_3_max value: 4.474773651459147 - type: nauc_mrr_at_3_std value: -19.11200967355029 - type: nauc_mrr_at_5_diff1 value: 29.817993784330003 - type: nauc_mrr_at_5_max value: 4.973386662596821 - type: nauc_mrr_at_5_std value: -19.08901246396465 - type: nauc_ndcg_at_1000_diff1 value: 27.742972640187336 - type: nauc_ndcg_at_1000_max value: 7.916917003519648 - type: nauc_ndcg_at_1000_std value: -15.17896601259802 - type: nauc_ndcg_at_100_diff1 value: 27.238853230669203 - type: nauc_ndcg_at_100_max value: 8.460940793652366 - type: nauc_ndcg_at_100_std value: -14.261896738662886 - type: nauc_ndcg_at_10_diff1 value: 27.66904868796311 - type: nauc_ndcg_at_10_max value: 6.735176820125588 - type: nauc_ndcg_at_10_std value: -18.448078157577832 - type: nauc_ndcg_at_1_diff1 value: 34.95550568195721 - type: nauc_ndcg_at_1_max value: 3.2125833396044503 - type: nauc_ndcg_at_1_std value: -18.10007817354091 - type: nauc_ndcg_at_20_diff1 value: 27.301933503569487 - type: nauc_ndcg_at_20_max value: 7.536489988333836 - type: nauc_ndcg_at_20_std value: -16.842402929106534 - type: nauc_ndcg_at_3_diff1 value: 29.372635951851155 - type: nauc_ndcg_at_3_max value: 4.820361216791387 - type: nauc_ndcg_at_3_std value: -19.513367064010172 - type: nauc_ndcg_at_5_diff1 value: 28.37372030928328 - type: nauc_ndcg_at_5_max value: 5.678798799104905 - type: nauc_ndcg_at_5_std value: -19.501286961437117 - type: nauc_precision_at_1000_diff1 value: -3.510350547767326 - type: nauc_precision_at_1000_max value: 21.377749750431178 - type: nauc_precision_at_1000_std value: 22.27774613662246 - type: nauc_precision_at_100_diff1 value: 8.87592055675654 - type: nauc_precision_at_100_max value: 23.073116570870873 - type: nauc_precision_at_100_std value: 15.055227218558281 - type: nauc_precision_at_10_diff1 value: 21.052874039568565 - type: nauc_precision_at_10_max value: 10.660428704804987 - type: nauc_precision_at_10_std value: -16.430957222899334 - type: nauc_precision_at_1_diff1 value: 34.95550568195721 - type: nauc_precision_at_1_max value: 3.2125833396044503 - type: nauc_precision_at_1_std value: -18.10007817354091 - type: nauc_precision_at_20_diff1 value: 17.989882712589928 - type: nauc_precision_at_20_max value: 14.07438846286306 - type: nauc_precision_at_20_std value: -9.440997023749803 - type: nauc_precision_at_3_diff1 value: 26.086905823987642 - type: nauc_precision_at_3_max value: 5.8656016048393225 - type: nauc_precision_at_3_std value: -20.040710978552877 - type: nauc_precision_at_5_diff1 value: 23.81277688113566 - type: nauc_precision_at_5_max 
value: 7.673783195436691 - type: nauc_precision_at_5_std value: -19.793547751043462 - type: nauc_recall_at_1000_diff1 value: 2.1874443524235154 - type: nauc_recall_at_1000_max value: 51.05037398799104 - type: nauc_recall_at_1000_std value: 53.35202253619663 - type: nauc_recall_at_100_diff1 value: 13.542586793678119 - type: nauc_recall_at_100_max value: 27.18200902894963 - type: nauc_recall_at_100_std value: 16.971437674746966 - type: nauc_recall_at_10_diff1 value: 21.973446928460568 - type: nauc_recall_at_10_max value: 10.41568392262844 - type: nauc_recall_at_10_std value: -16.967676205416033 - type: nauc_recall_at_1_diff1 value: 35.211396613278865 - type: nauc_recall_at_1_max value: 3.1343744942229086 - type: nauc_recall_at_1_std value: -18.170256518161427 - type: nauc_recall_at_20_diff1 value: 19.711023045178955 - type: nauc_recall_at_20_max value: 14.095426841286946 - type: nauc_recall_at_20_std value: -10.417724099405264 - type: nauc_recall_at_3_diff1 value: 26.246688921271716 - type: nauc_recall_at_3_max value: 5.649635052724743 - type: nauc_recall_at_3_std value: -20.04554246302986 - type: nauc_recall_at_5_diff1 value: 24.168809764476798 - type: nauc_recall_at_5_max value: 7.399478058484081 - type: nauc_recall_at_5_std value: -19.9514394901081 - type: ndcg_at_1 value: 17.201 - type: ndcg_at_10 value: 33.715 - type: ndcg_at_100 value: 40.025 - type: ndcg_at_1000 value: 41.685 - type: ndcg_at_20 value: 36.424 - type: ndcg_at_3 value: 26.31 - type: ndcg_at_5 value: 29.93 - type: precision_at_1 value: 17.201 - type: precision_at_10 value: 5.541 - type: precision_at_100 value: 0.8699999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_20 value: 3.329 - type: precision_at_3 value: 11.37 - type: precision_at_5 value: 8.649 - type: recall_at_1 value: 16.719 - type: recall_at_10 value: 53.248 - type: recall_at_100 value: 82.863 - type: recall_at_1000 value: 95.721 - type: recall_at_20 value: 63.797000000000004 - type: recall_at_3 value: 32.984 - type: recall_at_5 value: 41.702 task: type: Retrieval - dataset: config: default name: MTEB MindSmallReranking (default) revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 split: test type: mteb/mind_small metrics: - type: main_score value: 31.370055361148335 - type: map value: 31.370055361148335 - type: mrr value: 32.37440631143583 - type: nAUC_map_diff1 value: 14.695683399887827 - type: nAUC_map_max value: -23.4492856197865 - type: nAUC_map_std value: -8.885861431808289 - type: nAUC_mrr_diff1 value: 13.34849840135066 - type: nAUC_mrr_max value: -18.199354674002286 - type: nAUC_mrr_std value: -6.5654110123418565 task: type: Reranking - dataset: config: default name: MTEB NFCorpus (default) revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 split: test type: mteb/nfcorpus metrics: - type: main_score value: 29.754 - type: map_at_1 value: 4.567 - type: map_at_10 value: 10.027999999999999 - type: map_at_100 value: 13.431000000000001 - type: map_at_1000 value: 14.92 - type: map_at_20 value: 11.431 - type: map_at_3 value: 7.224 - type: map_at_5 value: 8.386000000000001 - type: mrr_at_1 value: 37.77089783281733 - type: mrr_at_10 value: 47.909725293626224 - type: mrr_at_100 value: 48.65139591019661 - type: mrr_at_1000 value: 48.69916406344342 - type: mrr_at_20 value: 48.367288102536705 - type: mrr_at_3 value: 46.49122807017543 - type: mrr_at_5 value: 47.0639834881321 - type: nauc_map_at_1000_diff1 value: 14.085748354172512 - type: nauc_map_at_1000_max value: 28.172950420215592 - type: nauc_map_at_1000_std value: 18.144213868106675 - type: 
nauc_map_at_100_diff1 value: 14.589976235435408 - type: nauc_map_at_100_max value: 26.576946199479153 - type: nauc_map_at_100_std value: 13.843137874037378 - type: nauc_map_at_10_diff1 value: 17.8495149344614 - type: nauc_map_at_10_max value: 18.82526312062093 - type: nauc_map_at_10_std value: 2.86785487558237 - type: nauc_map_at_1_diff1 value: 28.714904063353853 - type: nauc_map_at_1_max value: -3.169789138203512 - type: nauc_map_at_1_std value: -15.236185778974088 - type: nauc_map_at_20_diff1 value: 15.794494377362259 - type: nauc_map_at_20_max value: 22.383385559577366 - type: nauc_map_at_20_std value: 7.50366878545232 - type: nauc_map_at_3_diff1 value: 20.770182594071223 - type: nauc_map_at_3_max value: 9.50930632250425 - type: nauc_map_at_3_std value: -7.066408370212426 - type: nauc_map_at_5_diff1 value: 20.197174149206827 - type: nauc_map_at_5_max value: 14.021867267435159 - type: nauc_map_at_5_std value: -3.409836969018499 - type: nauc_mrr_at_1000_diff1 value: 22.986158985525027 - type: nauc_mrr_at_1000_max value: 37.185817395515556 - type: nauc_mrr_at_1000_std value: 23.706769752670958 - type: nauc_mrr_at_100_diff1 value: 22.94515989033249 - type: nauc_mrr_at_100_max value: 37.209904708113726 - type: nauc_mrr_at_100_std value: 23.746222614427094 - type: nauc_mrr_at_10_diff1 value: 22.81681858431028 - type: nauc_mrr_at_10_max value: 37.10239557383066 - type: nauc_mrr_at_10_std value: 23.663887285644442 - type: nauc_mrr_at_1_diff1 value: 27.148124088268162 - type: nauc_mrr_at_1_max value: 26.428295034510924 - type: nauc_mrr_at_1_std value: 11.84295790231629 - type: nauc_mrr_at_20_diff1 value: 22.946866772785427 - type: nauc_mrr_at_20_max value: 37.38173901985591 - type: nauc_mrr_at_20_std value: 23.776573546658362 - type: nauc_mrr_at_3_diff1 value: 23.563086894697765 - type: nauc_mrr_at_3_max value: 36.719834088530625 - type: nauc_mrr_at_3_std value: 22.817102267109433 - type: nauc_mrr_at_5_diff1 value: 23.131059142837216 - type: nauc_mrr_at_5_max value: 36.731326253130234 - type: nauc_mrr_at_5_std value: 23.069191250755768 - type: nauc_ndcg_at_1000_diff1 value: 15.95138362626174 - type: nauc_ndcg_at_1000_max value: 44.33022365744542 - type: nauc_ndcg_at_1000_std value: 35.72521592020635 - type: nauc_ndcg_at_100_diff1 value: 14.575407179245719 - type: nauc_ndcg_at_100_max value: 36.95692885691867 - type: nauc_ndcg_at_100_std value: 26.80789282599495 - type: nauc_ndcg_at_10_diff1 value: 15.594400206553201 - type: nauc_ndcg_at_10_max value: 35.348349357904375 - type: nauc_ndcg_at_10_std value: 25.492218422553346 - type: nauc_ndcg_at_1_diff1 value: 28.078477674957707 - type: nauc_ndcg_at_1_max value: 22.554521470143378 - type: nauc_ndcg_at_1_std value: 11.86138448259433 - type: nauc_ndcg_at_20_diff1 value: 13.880875656746857 - type: nauc_ndcg_at_20_max value: 34.45274753229036 - type: nauc_ndcg_at_20_std value: 25.804360762917444 - type: nauc_ndcg_at_3_diff1 value: 16.85604207727626 - type: nauc_ndcg_at_3_max value: 32.72858990186108 - type: nauc_ndcg_at_3_std value: 19.852261713002537 - type: nauc_ndcg_at_5_diff1 value: 16.70613670641336 - type: nauc_ndcg_at_5_max value: 35.398452792018126 - type: nauc_ndcg_at_5_std value: 22.547276511653237 - type: nauc_precision_at_1000_diff1 value: -3.9205652143854004 - type: nauc_precision_at_1000_max value: 11.56760575704433 - type: nauc_precision_at_1000_std value: 29.969843364807815 - type: nauc_precision_at_100_diff1 value: -3.2791504425228504 - type: nauc_precision_at_100_max value: 23.105292891322655 - type: nauc_precision_at_100_std value: 
38.24772180179586 - type: nauc_precision_at_10_diff1 value: 4.077038954732221 - type: nauc_precision_at_10_max value: 37.97114306278218 - type: nauc_precision_at_10_std value: 35.33785209505845 - type: nauc_precision_at_1_diff1 value: 27.148124088268162 - type: nauc_precision_at_1_max value: 26.428295034510924 - type: nauc_precision_at_1_std value: 11.84295790231629 - type: nauc_precision_at_20_diff1 value: -0.07715823284655204 - type: nauc_precision_at_20_max value: 32.32150505807481 - type: nauc_precision_at_20_std value: 35.93434300391347 - type: nauc_precision_at_3_diff1 value: 7.47311429058567 - type: nauc_precision_at_3_max value: 38.53868770942289 - type: nauc_precision_at_3_std value: 25.65319420988019 - type: nauc_precision_at_5_diff1 value: 6.8117959185149655 - type: nauc_precision_at_5_max value: 40.78936637790633 - type: nauc_precision_at_5_std value: 29.29130912546785 - type: nauc_recall_at_1000_diff1 value: 4.8295758031310285 - type: nauc_recall_at_1000_max value: 21.69239465610227 - type: nauc_recall_at_1000_std value: 18.948672947229195 - type: nauc_recall_at_100_diff1 value: 7.988051122946883 - type: nauc_recall_at_100_max value: 22.98766591265696 - type: nauc_recall_at_100_std value: 15.290463328578074 - type: nauc_recall_at_10_diff1 value: 13.365133585158528 - type: nauc_recall_at_10_max value: 20.732053918735378 - type: nauc_recall_at_10_std value: 6.370291758470967 - type: nauc_recall_at_1_diff1 value: 28.714904063353853 - type: nauc_recall_at_1_max value: -3.169789138203512 - type: nauc_recall_at_1_std value: -15.236185778974088 - type: nauc_recall_at_20_diff1 value: 8.20907091382773 - type: nauc_recall_at_20_max value: 21.274336782803303 - type: nauc_recall_at_20_std value: 11.112201780365108 - type: nauc_recall_at_3_diff1 value: 18.540225578507606 - type: nauc_recall_at_3_max value: 15.389377575957791 - type: nauc_recall_at_3_std value: -3.7988750765195216 - type: nauc_recall_at_5_diff1 value: 16.698902011414855 - type: nauc_recall_at_5_max value: 17.822073045346276 - type: nauc_recall_at_5_std value: -0.07358971200090962 - type: ndcg_at_1 value: 35.759 - type: ndcg_at_10 value: 29.754 - type: ndcg_at_100 value: 28.652 - type: ndcg_at_1000 value: 37.912 - type: ndcg_at_20 value: 28.366000000000003 - type: ndcg_at_3 value: 34.504000000000005 - type: ndcg_at_5 value: 31.996000000000002 - type: precision_at_1 value: 37.771 - type: precision_at_10 value: 23.064999999999998 - type: precision_at_100 value: 7.95 - type: precision_at_1000 value: 2.0820000000000003 - type: precision_at_20 value: 17.601 - type: precision_at_3 value: 33.745999999999995 - type: precision_at_5 value: 28.607 - type: recall_at_1 value: 4.567 - type: recall_at_10 value: 13.782 - type: recall_at_100 value: 31.269000000000002 - type: recall_at_1000 value: 65.342 - type: recall_at_20 value: 18.67 - type: recall_at_3 value: 8.280999999999999 - type: recall_at_5 value: 10.023 task: type: Retrieval - dataset: config: default name: MTEB NQ (default) revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 split: test type: mteb/nq metrics: - type: main_score value: 43.91 - type: map_at_1 value: 22.675 - type: map_at_10 value: 36.403 - type: map_at_100 value: 37.714 - type: map_at_1000 value: 37.759 - type: map_at_20 value: 37.252 - type: map_at_3 value: 32.115 - type: map_at_5 value: 34.537 - type: mrr_at_1 value: 25.869061413673233 - type: mrr_at_10 value: 38.71765160293547 - type: mrr_at_100 value: 39.729686512365404 - type: mrr_at_1000 value: 39.764837578266224 - type: mrr_at_20 value: 39.37441098811741 - 
type: mrr_at_3 value: 35.04731556585558 - type: mrr_at_5 value: 37.1330629586714 - type: nauc_map_at_1000_diff1 value: 29.08399936916601 - type: nauc_map_at_1000_max value: 16.29226971170329 - type: nauc_map_at_1000_std value: -7.18828364236571 - type: nauc_map_at_100_diff1 value: 29.07598191479223 - type: nauc_map_at_100_max value: 16.313043083658002 - type: nauc_map_at_100_std value: -7.159235211023741 - type: nauc_map_at_10_diff1 value: 28.90215829867663 - type: nauc_map_at_10_max value: 15.780381654305788 - type: nauc_map_at_10_std value: -8.11712873780433 - type: nauc_map_at_1_diff1 value: 33.88307764782601 - type: nauc_map_at_1_max value: 13.654560605994789 - type: nauc_map_at_1_std value: -7.876960683195404 - type: nauc_map_at_20_diff1 value: 29.05304473680581 - type: nauc_map_at_20_max value: 16.24806438199264 - type: nauc_map_at_20_std value: -7.506230537647335 - type: nauc_map_at_3_diff1 value: 28.67483907357866 - type: nauc_map_at_3_max value: 14.10844684276125 - type: nauc_map_at_3_std value: -9.093065914867637 - type: nauc_map_at_5_diff1 value: 28.53223723787074 - type: nauc_map_at_5_max value: 15.10465507767692 - type: nauc_map_at_5_std value: -8.732885670313973 - type: nauc_mrr_at_1000_diff1 value: 29.291300363671773 - type: nauc_mrr_at_1000_max value: 16.988142920972564 - type: nauc_mrr_at_1000_std value: -5.75719276458362 - type: nauc_mrr_at_100_diff1 value: 29.283110074292846 - type: nauc_mrr_at_100_max value: 17.014791337479164 - type: nauc_mrr_at_100_std value: -5.723942677946949 - type: nauc_mrr_at_10_diff1 value: 29.034300179010796 - type: nauc_mrr_at_10_max value: 16.698355482721286 - type: nauc_mrr_at_10_std value: -6.298274321653915 - type: nauc_mrr_at_1_diff1 value: 34.02416083157839 - type: nauc_mrr_at_1_max value: 15.09009259703498 - type: nauc_mrr_at_1_std value: -6.412150592020908 - type: nauc_mrr_at_20_diff1 value: 29.246190262288547 - type: nauc_mrr_at_20_max value: 16.993567089527037 - type: nauc_mrr_at_20_std value: -5.952301904693604 - type: nauc_mrr_at_3_diff1 value: 29.252167190201583 - type: nauc_mrr_at_3_max value: 15.472766023941444 - type: nauc_mrr_at_3_std value: -7.080025242602122 - type: nauc_mrr_at_5_diff1 value: 28.753793317757832 - type: nauc_mrr_at_5_max value: 16.217731120360888 - type: nauc_mrr_at_5_std value: -6.784186528364365 - type: nauc_ndcg_at_1000_diff1 value: 28.37769622443293 - type: nauc_ndcg_at_1000_max value: 18.86280647857455 - type: nauc_ndcg_at_1000_std value: -4.117301819438475 - type: nauc_ndcg_at_100_diff1 value: 28.160781181149385 - type: nauc_ndcg_at_100_max value: 19.666949578583665 - type: nauc_ndcg_at_100_std value: -3.0099241628896127 - type: nauc_ndcg_at_10_diff1 value: 27.487943653391138 - type: nauc_ndcg_at_10_max value: 17.72148461204397 - type: nauc_ndcg_at_10_std value: -7.13147681057827 - type: nauc_ndcg_at_1_diff1 value: 34.02416083157839 - type: nauc_ndcg_at_1_max value: 15.09009259703498 - type: nauc_ndcg_at_1_std value: -6.412150592020908 - type: nauc_ndcg_at_20_diff1 value: 27.98913738838462 - type: nauc_ndcg_at_20_max value: 19.358472910765496 - type: nauc_ndcg_at_20_std value: -5.168140450720515 - type: nauc_ndcg_at_3_diff1 value: 27.421114192997827 - type: nauc_ndcg_at_3_max value: 14.573815202096219 - type: nauc_ndcg_at_3_std value: -9.02108020108124 - type: nauc_ndcg_at_5_diff1 value: 26.863276500871386 - type: nauc_ndcg_at_5_max value: 16.215661987458027 - type: nauc_ndcg_at_5_std value: -8.51151065053834 - type: nauc_precision_at_1000_diff1 value: -1.9585068692545722 - type: 
nauc_precision_at_1000_max value: 12.525664210853856 - type: nauc_precision_at_1000_std value: 18.708210958723804 - type: nauc_precision_at_100_diff1 value: 4.218996145429994 - type: nauc_precision_at_100_max value: 21.29629036792087 - type: nauc_precision_at_100_std value: 22.876761226300427 - type: nauc_precision_at_10_diff1 value: 14.912990806791418 - type: nauc_precision_at_10_max value: 20.15924358583 - type: nauc_precision_at_10_std value: 1.139621390169978 - type: nauc_precision_at_1_diff1 value: 34.02416083157839 - type: nauc_precision_at_1_max value: 15.09009259703498 - type: nauc_precision_at_1_std value: -6.412150592020908 - type: nauc_precision_at_20_diff1 value: 11.657378156434522 - type: nauc_precision_at_20_max value: 23.38664754164927 - type: nauc_precision_at_20_std value: 9.617305551800612 - type: nauc_precision_at_3_diff1 value: 20.65465729065584 - type: nauc_precision_at_3_max value: 15.503084569371712 - type: nauc_precision_at_3_std value: -7.20044261830195 - type: nauc_precision_at_5_diff1 value: 17.40716535145957 - type: nauc_precision_at_5_max value: 17.712445714665233 - type: nauc_precision_at_5_std value: -4.721781521873729 - type: nauc_recall_at_1000_diff1 value: 21.620801919444965 - type: nauc_recall_at_1000_max value: 70.1515764766629 - type: nauc_recall_at_1000_std value: 68.09307840072094 - type: nauc_recall_at_100_diff1 value: 21.56265782889045 - type: nauc_recall_at_100_max value: 49.39463091037063 - type: nauc_recall_at_100_std value: 39.090999181732265 - type: nauc_recall_at_10_diff1 value: 21.766222223998323 - type: nauc_recall_at_10_max value: 22.538222732899847 - type: nauc_recall_at_10_std value: -5.681237338134916 - type: nauc_recall_at_1_diff1 value: 33.88307764782601 - type: nauc_recall_at_1_max value: 13.654560605994789 - type: nauc_recall_at_1_std value: -7.876960683195404 - type: nauc_recall_at_20_diff1 value: 22.984671066381193 - type: nauc_recall_at_20_max value: 32.19082601358247 - type: nauc_recall_at_20_std value: 3.881886772570749 - type: nauc_recall_at_3_diff1 value: 22.784351240811095 - type: nauc_recall_at_3_max value: 14.199364117910083 - type: nauc_recall_at_3_std value: -10.359618653345851 - type: nauc_recall_at_5_diff1 value: 20.994385793302932 - type: nauc_recall_at_5_max value: 17.75611443293454 - type: nauc_recall_at_5_std value: -9.551812400313235 - type: ndcg_at_1 value: 25.868999999999996 - type: ndcg_at_10 value: 43.91 - type: ndcg_at_100 value: 49.533 - type: ndcg_at_1000 value: 50.589 - type: ndcg_at_20 value: 46.691 - type: ndcg_at_3 value: 35.681000000000004 - type: ndcg_at_5 value: 39.75 - type: precision_at_1 value: 25.868999999999996 - type: precision_at_10 value: 7.722999999999999 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_20 value: 4.521 - type: precision_at_3 value: 16.773 - type: precision_at_5 value: 12.422 - type: recall_at_1 value: 22.675 - type: recall_at_10 value: 64.63600000000001 - type: recall_at_100 value: 89.253 - type: recall_at_1000 value: 97.103 - type: recall_at_20 value: 75.027 - type: recall_at_3 value: 43.079 - type: recall_at_5 value: 52.518 task: type: Retrieval - dataset: config: default name: MTEB QuoraRetrieval (default) revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 split: dev type: mteb/quora metrics: - type: main_score value: 85.386 - type: map_at_1 value: 66.987 - type: map_at_10 value: 81.444 - type: map_at_100 value: 82.14 - type: map_at_1000 value: 82.157 - type: map_at_20 value: 81.932 - type: map_at_3 value: 
78.484 - type: map_at_5 value: 80.274 - type: mrr_at_1 value: 77.44 - type: mrr_at_10 value: 84.2241825396826 - type: mrr_at_100 value: 84.40693808056355 - type: mrr_at_1000 value: 84.40988912757359 - type: mrr_at_20 value: 84.34978828558788 - type: mrr_at_3 value: 83.10333333333354 - type: mrr_at_5 value: 83.82533333333342 - type: nauc_map_at_1000_diff1 value: 71.91679334130635 - type: nauc_map_at_1000_max value: 31.103584943909812 - type: nauc_map_at_1000_std value: -40.456705674932074 - type: nauc_map_at_100_diff1 value: 71.91963348224603 - type: nauc_map_at_100_max value: 31.113011110459922 - type: nauc_map_at_100_std value: -40.502282566735715 - type: nauc_map_at_10_diff1 value: 72.12325712296365 - type: nauc_map_at_10_max value: 30.467237037792295 - type: nauc_map_at_10_std value: -41.94971797229005 - type: nauc_map_at_1_diff1 value: 76.18084680744592 - type: nauc_map_at_1_max value: 19.221792285260147 - type: nauc_map_at_1_std value: -36.62117815127342 - type: nauc_map_at_20_diff1 value: 72.00403582380048 - type: nauc_map_at_20_max value: 31.004347688742417 - type: nauc_map_at_20_std value: -41.060042626586416 - type: nauc_map_at_3_diff1 value: 72.93189642332118 - type: nauc_map_at_3_max value: 27.73892205168097 - type: nauc_map_at_3_std value: -43.30991939814638 - type: nauc_map_at_5_diff1 value: 72.5160537522699 - type: nauc_map_at_5_max value: 29.63685791438478 - type: nauc_map_at_5_std value: -42.96659156810372 - type: nauc_mrr_at_1000_diff1 value: 71.90147173095302 - type: nauc_mrr_at_1000_max value: 35.940317491851694 - type: nauc_mrr_at_1000_std value: -36.68579617983882 - type: nauc_mrr_at_100_diff1 value: 71.90072229115142 - type: nauc_mrr_at_100_max value: 35.94516255768936 - type: nauc_mrr_at_100_std value: -36.67856828426162 - type: nauc_mrr_at_10_diff1 value: 71.85801945991824 - type: nauc_mrr_at_10_max value: 36.07450549726053 - type: nauc_mrr_at_10_std value: -36.6574239418474 - type: nauc_mrr_at_1_diff1 value: 72.7487436455008 - type: nauc_mrr_at_1_max value: 33.4319386768096 - type: nauc_mrr_at_1_std value: -36.45664395168034 - type: nauc_mrr_at_20_diff1 value: 71.90639260981615 - type: nauc_mrr_at_20_max value: 36.016630372090525 - type: nauc_mrr_at_20_std value: -36.66806949361515 - type: nauc_mrr_at_3_diff1 value: 71.44580989174713 - type: nauc_mrr_at_3_max value: 36.14773805963233 - type: nauc_mrr_at_3_std value: -36.77858114551201 - type: nauc_mrr_at_5_diff1 value: 71.76720756731626 - type: nauc_mrr_at_5_max value: 36.2542204305357 - type: nauc_mrr_at_5_std value: -36.6104713553764 - type: nauc_ndcg_at_1000_diff1 value: 71.53171713574864 - type: nauc_ndcg_at_1000_max value: 33.46348829708273 - type: nauc_ndcg_at_1000_std value: -38.07080231429607 - type: nauc_ndcg_at_100_diff1 value: 71.51771411127783 - type: nauc_ndcg_at_100_max value: 33.606977743408365 - type: nauc_ndcg_at_100_std value: -38.095941435745075 - type: nauc_ndcg_at_10_diff1 value: 71.69089621879667 - type: nauc_ndcg_at_10_max value: 33.00069266017414 - type: nauc_ndcg_at_10_std value: -40.7188348379717 - type: nauc_ndcg_at_1_diff1 value: 72.79491888263016 - type: nauc_ndcg_at_1_max value: 33.29638988300579 - type: nauc_ndcg_at_1_std value: -36.447696477457235 - type: nauc_ndcg_at_20_diff1 value: 71.76982125411759 - type: nauc_ndcg_at_20_max value: 33.64375758413667 - type: nauc_ndcg_at_20_std value: -39.49026957812086 - type: nauc_ndcg_at_3_diff1 value: 71.16497890966858 - type: nauc_ndcg_at_3_max value: 32.07287318566269 - type: nauc_ndcg_at_3_std value: -41.48214270415671 - type: 
nauc_ndcg_at_5_diff1 value: 71.62531294064848 - type: nauc_ndcg_at_5_max value: 32.669489846127426 - type: nauc_ndcg_at_5_std value: -41.48868720768341 - type: nauc_precision_at_1000_diff1 value: -41.78128711551645 - type: nauc_precision_at_1000_max value: 2.438112824671157 - type: nauc_precision_at_1000_std value: 36.31304884788196 - type: nauc_precision_at_100_diff1 value: -40.88585036122386 - type: nauc_precision_at_100_max value: 4.549436618614055 - type: nauc_precision_at_100_std value: 34.22863218610554 - type: nauc_precision_at_10_diff1 value: -32.498791014783215 - type: nauc_precision_at_10_max value: 10.335105655353592 - type: nauc_precision_at_10_std value: 19.615568390029996 - type: nauc_precision_at_1_diff1 value: 72.79491888263016 - type: nauc_precision_at_1_max value: 33.29638988300579 - type: nauc_precision_at_1_std value: -36.447696477457235 - type: nauc_precision_at_20_diff1 value: -37.02913687382553 - type: nauc_precision_at_20_max value: 7.702625378474581 - type: nauc_precision_at_20_std value: 26.924503385144305 - type: nauc_precision_at_3_diff1 value: -10.70889509336384 - type: nauc_precision_at_3_max value: 19.01391790730139 - type: nauc_precision_at_3_std value: -0.3082223867032308 - type: nauc_precision_at_5_diff1 value: -23.455129536841024 - type: nauc_precision_at_5_max value: 15.200131768852115 - type: nauc_precision_at_5_std value: 10.043071356667568 - type: nauc_recall_at_1000_diff1 value: 53.062595262077814 - type: nauc_recall_at_1000_max value: -22.60327055724744 - type: nauc_recall_at_1000_std value: 30.926362639911186 - type: nauc_recall_at_100_diff1 value: 64.86049883288433 - type: nauc_recall_at_100_max value: 43.630795280243426 - type: nauc_recall_at_100_std value: -19.004039027440363 - type: nauc_recall_at_10_diff1 value: 67.6088540053325 - type: nauc_recall_at_10_max value: 32.59063448235274 - type: nauc_recall_at_10_std value: -48.53346063375749 - type: nauc_recall_at_1_diff1 value: 76.18084680744592 - type: nauc_recall_at_1_max value: 19.221792285260147 - type: nauc_recall_at_1_std value: -36.62117815127342 - type: nauc_recall_at_20_diff1 value: 69.9198659134458 - type: nauc_recall_at_20_max value: 39.37372830048965 - type: nauc_recall_at_20_std value: -45.5107664415436 - type: nauc_recall_at_3_diff1 value: 69.4918403464634 - type: nauc_recall_at_3_max value: 27.888269294676952 - type: nauc_recall_at_3_std value: -46.75391426769085 - type: nauc_recall_at_5_diff1 value: 68.69308264633088 - type: nauc_recall_at_5_max value: 30.788769684713763 - type: nauc_recall_at_5_std value: -48.20245507137716 - type: ndcg_at_1 value: 77.42 - type: ndcg_at_10 value: 85.386 - type: ndcg_at_100 value: 86.964 - type: ndcg_at_1000 value: 87.10799999999999 - type: ndcg_at_20 value: 86.259 - type: ndcg_at_3 value: 82.356 - type: ndcg_at_5 value: 83.91 - type: precision_at_1 value: 77.42 - type: precision_at_10 value: 13.064 - type: precision_at_100 value: 1.488 - type: precision_at_1000 value: 0.152 - type: precision_at_20 value: 6.978 - type: precision_at_3 value: 36.353 - type: precision_at_5 value: 23.888 - type: recall_at_1 value: 66.987 - type: recall_at_10 value: 93.487 - type: recall_at_100 value: 99.125 - type: recall_at_1000 value: 99.94 - type: recall_at_20 value: 96.263 - type: recall_at_3 value: 84.761 - type: recall_at_5 value: 89.242 task: type: Retrieval - dataset: config: default name: MTEB QuoraRetrieval (default) revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 split: test type: mteb/quora metrics: - type: main_score value: 85.56099999999999 - type: 
map_at_1 value: 67.497 - type: map_at_10 value: 81.523 - type: map_at_100 value: 82.253 - type: map_at_1000 value: 82.269 - type: map_at_20 value: 82.0 - type: map_at_3 value: 78.39399999999999 - type: map_at_5 value: 80.352 - type: mrr_at_1 value: 77.97 - type: mrr_at_10 value: 84.6137420634918 - type: mrr_at_100 value: 84.7867074667826 - type: mrr_at_1000 value: 84.7886101354869 - type: mrr_at_20 value: 84.74068012792132 - type: mrr_at_3 value: 83.48833333333296 - type: mrr_at_5 value: 84.26183333333283 - type: nauc_map_at_1000_diff1 value: 72.75256203555736 - type: nauc_map_at_1000_max value: 32.109499410706924 - type: nauc_map_at_1000_std value: -43.48607664191692 - type: nauc_map_at_100_diff1 value: 72.7621590254899 - type: nauc_map_at_100_max value: 32.08666240896605 - type: nauc_map_at_100_std value: -43.52175837960735 - type: nauc_map_at_10_diff1 value: 73.0914502532422 - type: nauc_map_at_10_max value: 31.21935764927552 - type: nauc_map_at_10_std value: -45.596143090602155 - type: nauc_map_at_1_diff1 value: 77.6224306824682 - type: nauc_map_at_1_max value: 22.507118034491427 - type: nauc_map_at_1_std value: -40.33167676556522 - type: nauc_map_at_20_diff1 value: 72.8820293672954 - type: nauc_map_at_20_max value: 31.73360303879727 - type: nauc_map_at_20_std value: -44.454775928554376 - type: nauc_map_at_3_diff1 value: 74.10546474965095 - type: nauc_map_at_3_max value: 27.647987662942402 - type: nauc_map_at_3_std value: -47.234626583510355 - type: nauc_map_at_5_diff1 value: 73.52017296999442 - type: nauc_map_at_5_max value: 29.763316409372663 - type: nauc_map_at_5_std value: -46.85804426702457 - type: nauc_mrr_at_1000_diff1 value: 72.46174562552228 - type: nauc_mrr_at_1000_max value: 36.29693186997953 - type: nauc_mrr_at_1000_std value: -39.138708209808 - type: nauc_mrr_at_100_diff1 value: 72.46035839735728 - type: nauc_mrr_at_100_max value: 36.302723895121645 - type: nauc_mrr_at_100_std value: -39.1324046718197 - type: nauc_mrr_at_10_diff1 value: 72.36917674578362 - type: nauc_mrr_at_10_max value: 36.369854321798634 - type: nauc_mrr_at_10_std value: -39.236853924584764 - type: nauc_mrr_at_1_diff1 value: 74.17185246735727 - type: nauc_mrr_at_1_max value: 34.95837580050766 - type: nauc_mrr_at_1_std value: -37.80684342737821 - type: nauc_mrr_at_20_diff1 value: 72.44437909450644 - type: nauc_mrr_at_20_max value: 36.33224715875864 - type: nauc_mrr_at_20_std value: -39.169673628719565 - type: nauc_mrr_at_3_diff1 value: 72.22379259192769 - type: nauc_mrr_at_3_max value: 35.824553778653325 - type: nauc_mrr_at_3_std value: -39.79306459480282 - type: nauc_mrr_at_5_diff1 value: 72.2369691369586 - type: nauc_mrr_at_5_max value: 36.33784223941021 - type: nauc_mrr_at_5_std value: -39.36670062046302 - type: nauc_ndcg_at_1000_diff1 value: 72.01179014018193 - type: nauc_ndcg_at_1000_max value: 34.66610828967116 - type: nauc_ndcg_at_1000_std value: -40.79246584671935 - type: nauc_ndcg_at_100_diff1 value: 72.02571416492458 - type: nauc_ndcg_at_100_max value: 34.724662063977476 - type: nauc_ndcg_at_100_std value: -40.71853456489787 - type: nauc_ndcg_at_10_diff1 value: 72.05449454021301 - type: nauc_ndcg_at_10_max value: 33.65231664572072 - type: nauc_ndcg_at_10_std value: -44.387726473312185 - type: nauc_ndcg_at_1_diff1 value: 74.17185246735727 - type: nauc_ndcg_at_1_max value: 34.97004776008378 - type: nauc_ndcg_at_1_std value: -37.71249508747126 - type: nauc_ndcg_at_20_diff1 value: 72.09392270433015 - type: nauc_ndcg_at_20_max value: 34.04000107921998 - type: nauc_ndcg_at_20_std value: 
-42.86390076582997 - type: nauc_ndcg_at_3_diff1 value: 72.04472679866134 - type: nauc_ndcg_at_3_max value: 31.342820752545002 - type: nauc_ndcg_at_3_std value: -44.8034473982025 - type: nauc_ndcg_at_5_diff1 value: 72.01402290251924 - type: nauc_ndcg_at_5_max value: 32.582295878461906 - type: nauc_ndcg_at_5_std value: -45.23466775961186 - type: nauc_precision_at_1000_diff1 value: -43.708171172526 - type: nauc_precision_at_1000_max value: 4.162817192837958 - type: nauc_precision_at_1000_std value: 41.4385681190868 - type: nauc_precision_at_100_diff1 value: -43.108346163549854 - type: nauc_precision_at_100_max value: 4.67035667258918 - type: nauc_precision_at_100_std value: 41.46618060427868 - type: nauc_precision_at_10_diff1 value: -34.98729419918756 - type: nauc_precision_at_10_max value: 8.60311602976671 - type: nauc_precision_at_10_std value: 24.27173552773047 - type: nauc_precision_at_1_diff1 value: 74.17185246735727 - type: nauc_precision_at_1_max value: 34.97004776008378 - type: nauc_precision_at_1_std value: -37.71249508747126 - type: nauc_precision_at_20_diff1 value: -39.693045353877146 - type: nauc_precision_at_20_max value: 6.566610093825938 - type: nauc_precision_at_20_std value: 32.591669383633864 - type: nauc_precision_at_3_diff1 value: -10.84059740293422 - type: nauc_precision_at_3_max value: 14.778921667316514 - type: nauc_precision_at_3_std value: 0.20610132151386867 - type: nauc_precision_at_5_diff1 value: -25.776681106529015 - type: nauc_precision_at_5_max value: 11.422241065581462 - type: nauc_precision_at_5_std value: 13.113277134240322 - type: nauc_recall_at_1000_diff1 value: 63.99277654005502 - type: nauc_recall_at_1000_max value: 81.89809119086901 - type: nauc_recall_at_1000_std value: 49.009493075982604 - type: nauc_recall_at_100_diff1 value: 65.00747551591195 - type: nauc_recall_at_100_max value: 58.34661374064065 - type: nauc_recall_at_100_std value: -2.2251834665513215 - type: nauc_recall_at_10_diff1 value: 65.53778699047477 - type: nauc_recall_at_10_max value: 32.37899366515905 - type: nauc_recall_at_10_std value: -56.58653525784876 - type: nauc_recall_at_1_diff1 value: 77.6224306824682 - type: nauc_recall_at_1_max value: 22.507118034491427 - type: nauc_recall_at_1_std value: -40.33167676556522 - type: nauc_recall_at_20_diff1 value: 65.50582600247682 - type: nauc_recall_at_20_max value: 35.57972574345174 - type: nauc_recall_at_20_std value: -51.94704680030098 - type: nauc_recall_at_3_diff1 value: 70.09083256856663 - type: nauc_recall_at_3_max value: 24.1544101793869 - type: nauc_recall_at_3_std value: -53.465675345124055 - type: nauc_recall_at_5_diff1 value: 67.77056112320096 - type: nauc_recall_at_5_max value: 27.958593788155987 - type: nauc_recall_at_5_std value: -56.15384175590208 - type: ndcg_at_1 value: 77.97 - type: ndcg_at_10 value: 85.56099999999999 - type: ndcg_at_100 value: 87.14699999999999 - type: ndcg_at_1000 value: 87.274 - type: ndcg_at_20 value: 86.416 - type: ndcg_at_3 value: 82.349 - type: ndcg_at_5 value: 84.148 - type: precision_at_1 value: 77.97 - type: precision_at_10 value: 13.175 - type: precision_at_100 value: 1.5270000000000001 - type: precision_at_1000 value: 0.156 - type: precision_at_20 value: 7.037 - type: precision_at_3 value: 36.183 - type: precision_at_5 value: 24.01 - type: recall_at_1 value: 67.497 - type: recall_at_10 value: 93.452 - type: recall_at_100 value: 99.181 - type: recall_at_1000 value: 99.855 - type: recall_at_20 value: 96.31 - type: recall_at_3 value: 84.396 - type: recall_at_5 value: 89.298 task: type: Retrieval - 
dataset: config: default name: MTEB SCIDOCS (default) revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 split: test type: mteb/scidocs metrics: - type: main_score value: 21.383 - type: map_at_1 value: 4.547 - type: map_at_10 value: 12.629000000000001 - type: map_at_100 value: 15.085999999999999 - type: map_at_1000 value: 15.478 - type: map_at_20 value: 13.806 - type: map_at_3 value: 8.620999999999999 - type: map_at_5 value: 10.465 - type: mrr_at_1 value: 22.3 - type: mrr_at_10 value: 34.701309523809535 - type: mrr_at_100 value: 35.731488574144606 - type: mrr_at_1000 value: 35.78105399278444 - type: mrr_at_20 value: 35.27448919286768 - type: mrr_at_3 value: 30.800000000000015 - type: mrr_at_5 value: 33.14999999999999 - type: nauc_map_at_1000_diff1 value: 8.343120963945573 - type: nauc_map_at_1000_max value: 33.75996610404545 - type: nauc_map_at_1000_std value: 21.75562874917409 - type: nauc_map_at_100_diff1 value: 8.384427363803946 - type: nauc_map_at_100_max value: 33.79077477489537 - type: nauc_map_at_100_std value: 21.686104124370104 - type: nauc_map_at_10_diff1 value: 9.234064591129155 - type: nauc_map_at_10_max value: 32.0489964068748 - type: nauc_map_at_10_std value: 17.763385123837153 - type: nauc_map_at_1_diff1 value: 15.665356237945588 - type: nauc_map_at_1_max value: 26.57885889682774 - type: nauc_map_at_1_std value: 13.193930049941363 - type: nauc_map_at_20_diff1 value: 8.780121965824657 - type: nauc_map_at_20_max value: 32.869986461841606 - type: nauc_map_at_20_std value: 19.467991991140142 - type: nauc_map_at_3_diff1 value: 11.035561869808728 - type: nauc_map_at_3_max value: 27.582026721813047 - type: nauc_map_at_3_std value: 13.036167718922828 - type: nauc_map_at_5_diff1 value: 10.080603205194704 - type: nauc_map_at_5_max value: 30.044299131641584 - type: nauc_map_at_5_std value: 15.350543387048896 - type: nauc_mrr_at_1000_diff1 value: 11.840089363851067 - type: nauc_mrr_at_1000_max value: 29.12864266402923 - type: nauc_mrr_at_1000_std value: 17.228534279486695 - type: nauc_mrr_at_100_diff1 value: 11.809506621783779 - type: nauc_mrr_at_100_max value: 29.160886200309044 - type: nauc_mrr_at_100_std value: 17.282962681247795 - type: nauc_mrr_at_10_diff1 value: 11.935445560209875 - type: nauc_mrr_at_10_max value: 29.30219333248834 - type: nauc_mrr_at_10_std value: 16.987513421652007 - type: nauc_mrr_at_1_diff1 value: 16.006372448277954 - type: nauc_mrr_at_1_max value: 26.835362452310946 - type: nauc_mrr_at_1_std value: 13.253888908633249 - type: nauc_mrr_at_20_diff1 value: 11.859112690530099 - type: nauc_mrr_at_20_max value: 29.23212198991369 - type: nauc_mrr_at_20_std value: 17.371690243627583 - type: nauc_mrr_at_3_diff1 value: 11.942072558497351 - type: nauc_mrr_at_3_max value: 27.518198076910377 - type: nauc_mrr_at_3_std value: 15.553451577229513 - type: nauc_mrr_at_5_diff1 value: 11.844643930313167 - type: nauc_mrr_at_5_max value: 28.507660302183385 - type: nauc_mrr_at_5_std value: 16.407092796978915 - type: nauc_ndcg_at_1000_diff1 value: 7.566941588567799 - type: nauc_ndcg_at_1000_max value: 35.66190450829144 - type: nauc_ndcg_at_1000_std value: 27.644208320073655 - type: nauc_ndcg_at_100_diff1 value: 7.789806046298824 - type: nauc_ndcg_at_100_max value: 36.44803836644228 - type: nauc_ndcg_at_100_std value: 28.570220616023008 - type: nauc_ndcg_at_10_diff1 value: 9.112031867544205 - type: nauc_ndcg_at_10_max value: 33.29203357424967 - type: nauc_ndcg_at_10_std value: 19.717835519047455 - type: nauc_ndcg_at_1_diff1 value: 16.006372448277954 - type: nauc_ndcg_at_1_max value: 
26.835362452310946 - type: nauc_ndcg_at_1_std value: 13.253888908633249 - type: nauc_ndcg_at_20_diff1 value: 8.512820930810134 - type: nauc_ndcg_at_20_max value: 34.04402394206212 - type: nauc_ndcg_at_20_std value: 22.450723493452905 - type: nauc_ndcg_at_3_diff1 value: 10.666228630298292 - type: nauc_ndcg_at_3_max value: 27.790265132571 - type: nauc_ndcg_at_3_std value: 14.549017414623137 - type: nauc_ndcg_at_5_diff1 value: 9.689838800282292 - type: nauc_ndcg_at_5_max value: 30.200901003139375 - type: nauc_ndcg_at_5_std value: 16.756955996011495 - type: nauc_precision_at_1000_diff1 value: -1.7688373117252636 - type: nauc_precision_at_1000_max value: 28.14939872063652 - type: nauc_precision_at_1000_std value: 36.11781984236825 - type: nauc_precision_at_100_diff1 value: 2.6982654390147434 - type: nauc_precision_at_100_max value: 34.95302652738801 - type: nauc_precision_at_100_std value: 38.612855789126435 - type: nauc_precision_at_10_diff1 value: 6.3582141835613575 - type: nauc_precision_at_10_max value: 34.35036042979775 - type: nauc_precision_at_10_std value: 22.37284626453628 - type: nauc_precision_at_1_diff1 value: 16.006372448277954 - type: nauc_precision_at_1_max value: 26.835362452310946 - type: nauc_precision_at_1_std value: 13.253888908633249 - type: nauc_precision_at_20_diff1 value: 4.929659714506811 - type: nauc_precision_at_20_max value: 33.63912155282006 - type: nauc_precision_at_20_std value: 26.48159430953102 - type: nauc_precision_at_3_diff1 value: 8.684533707455072 - type: nauc_precision_at_3_max value: 27.201795527705 - type: nauc_precision_at_3_std value: 14.702968668907863 - type: nauc_precision_at_5_diff1 value: 6.996882444408692 - type: nauc_precision_at_5_max value: 30.190181104191716 - type: nauc_precision_at_5_std value: 17.9773294217075 - type: nauc_recall_at_1000_diff1 value: -2.081286806432597 - type: nauc_recall_at_1000_max value: 28.92540441778861 - type: nauc_recall_at_1000_std value: 38.288436893164594 - type: nauc_recall_at_100_diff1 value: 2.587296034481601 - type: nauc_recall_at_100_max value: 35.29377052645708 - type: nauc_recall_at_100_std value: 39.05821703702423 - type: nauc_recall_at_10_diff1 value: 6.316338189514334 - type: nauc_recall_at_10_max value: 34.11763962208226 - type: nauc_recall_at_10_std value: 22.195626530130237 - type: nauc_recall_at_1_diff1 value: 15.665356237945588 - type: nauc_recall_at_1_max value: 26.57885889682774 - type: nauc_recall_at_1_std value: 13.193930049941363 - type: nauc_recall_at_20_diff1 value: 4.830004620366135 - type: nauc_recall_at_20_max value: 33.408749981759904 - type: nauc_recall_at_20_std value: 26.32129089751909 - type: nauc_recall_at_3_diff1 value: 8.682897494798821 - type: nauc_recall_at_3_max value: 26.920725011561657 - type: nauc_recall_at_3_std value: 14.373627083919555 - type: nauc_recall_at_5_diff1 value: 7.078349615166568 - type: nauc_recall_at_5_max value: 29.938684723317483 - type: nauc_recall_at_5_std value: 17.610677580599983 - type: ndcg_at_1 value: 22.3 - type: ndcg_at_10 value: 21.383 - type: ndcg_at_100 value: 30.686999999999998 - type: ndcg_at_1000 value: 36.806 - type: ndcg_at_20 value: 24.38 - type: ndcg_at_3 value: 19.262999999999998 - type: ndcg_at_5 value: 17.226 - type: precision_at_1 value: 22.3 - type: precision_at_10 value: 11.53 - type: precision_at_100 value: 2.5170000000000003 - type: precision_at_1000 value: 0.39699999999999996 - type: precision_at_20 value: 7.495 - type: precision_at_3 value: 18.3 - type: precision_at_5 value: 15.479999999999999 - type: recall_at_1 value: 4.547 - 
type: recall_at_10 value: 23.357 - type: recall_at_100 value: 51.047 - type: recall_at_1000 value: 80.63 - type: recall_at_20 value: 30.392999999999997 - type: recall_at_3 value: 11.161999999999999 - type: recall_at_5 value: 15.702 task: type: Retrieval - dataset: config: default name: MTEB SICK-R (default) revision: 20a6d6f312dd54037fe07a32d58e5e168867909d split: test type: mteb/sickr-sts metrics: - type: cosine_pearson value: 82.89761522599564 - type: cosine_spearman value: 79.61239240879378 - type: euclidean_pearson value: 80.47994478833725 - type: euclidean_spearman value: 79.61239321441792 - type: main_score value: 79.61239240879378 - type: manhattan_pearson value: 80.28255784608595 - type: manhattan_spearman value: 79.40840813688955 - type: pearson value: 82.89761555899388 - type: spearman value: 79.6123884921359 task: type: STS - dataset: config: default name: MTEB STS12 (default) revision: a0d554a64d88156834ff5ae9920b964011b16384 split: test type: mteb/sts12-sts metrics: - type: cosine_pearson value: 80.60566114110073 - type: cosine_spearman value: 70.13320748146175 - type: euclidean_pearson value: 77.57067368654582 - type: euclidean_spearman value: 70.13314252571948 - type: main_score value: 70.13320748146175 - type: manhattan_pearson value: 77.29676392658962 - type: manhattan_spearman value: 69.88321444641842 - type: pearson value: 80.60566179532725 - type: spearman value: 70.13415482313484 task: type: STS - dataset: config: default name: MTEB STS13 (default) revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca split: test type: mteb/sts13-sts metrics: - type: cosine_pearson value: 78.320437342309 - type: cosine_spearman value: 80.74355074851618 - type: euclidean_pearson value: 80.17187362623575 - type: euclidean_spearman value: 80.7435506767441 - type: main_score value: 80.74355074851618 - type: manhattan_pearson value: 79.88874001476627 - type: manhattan_spearman value: 80.45868266096211 - type: pearson value: 78.32043820850613 - type: spearman value: 80.7435506767441 task: type: STS - dataset: config: default name: MTEB STS14 (default) revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 split: test type: mteb/sts14-sts metrics: - type: cosine_pearson value: 79.70921389242768 - type: cosine_spearman value: 76.24023668566399 - type: euclidean_pearson value: 79.46251371995523 - type: euclidean_spearman value: 76.24023301664893 - type: main_score value: 76.24023668566399 - type: manhattan_pearson value: 79.20065381375537 - type: manhattan_spearman value: 76.00256203342937 - type: pearson value: 79.70921366493712 - type: spearman value: 76.24024123334894 task: type: STS - dataset: config: default name: MTEB STS15 (default) revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 split: test type: mteb/sts15-sts metrics: - type: cosine_pearson value: 82.52746024419504 - type: cosine_spearman value: 84.11623590647757 - type: euclidean_pearson value: 83.48562697051246 - type: euclidean_spearman value: 84.11623814887781 - type: main_score value: 84.11623590647757 - type: manhattan_pearson value: 83.31205342795175 - type: manhattan_spearman value: 83.94265353931766 - type: pearson value: 82.5274599058833 - type: spearman value: 84.11622788808319 task: type: STS - dataset: config: default name: MTEB STS16 (default) revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 split: test type: mteb/sts16-sts metrics: - type: cosine_pearson value: 74.82265172874041 - type: cosine_spearman value: 78.4960329393101 - type: euclidean_pearson value: 78.22716476239002 - type: euclidean_spearman value: 
78.4960329393101 - type: main_score value: 78.4960329393101 - type: manhattan_pearson value: 78.05885015944702 - type: manhattan_spearman value: 78.3249847702297 - type: pearson value: 74.8226511407601 - type: spearman value: 78.49603251582666 task: type: STS - dataset: config: en-ar name: MTEB STS17 (en-ar) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 9.087713281584145 - type: cosine_spearman value: 9.1279366990081 - type: euclidean_pearson value: 9.117969623657645 - type: euclidean_spearman value: 9.1279366990081 - type: main_score value: 9.1279366990081 - type: manhattan_pearson value: 6.440301458166155 - type: manhattan_spearman value: 6.5264401438882 - type: pearson value: 9.087716717716269 - type: spearman value: 9.1279366990081 task: type: STS - dataset: config: ko-ko name: MTEB STS17 (ko-ko) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 21.931422026594692 - type: cosine_spearman value: 32.79923712532921 - type: euclidean_pearson value: 28.93227440763988 - type: euclidean_spearman value: 32.79905675768862 - type: main_score value: 32.79923712532921 - type: manhattan_pearson value: 29.05113424705813 - type: manhattan_spearman value: 32.79564881007963 - type: pearson value: 21.931423583103467 - type: spearman value: 32.798301050425714 task: type: STS - dataset: config: en-de name: MTEB STS17 (en-de) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 30.28151988475531 - type: cosine_spearman value: 28.038768547398902 - type: euclidean_pearson value: 30.498915267658234 - type: euclidean_spearman value: 28.038768547398902 - type: main_score value: 28.038768547398902 - type: manhattan_pearson value: 30.794292054714152 - type: manhattan_spearman value: 28.56988926201773 - type: pearson value: 30.281515394951132 - type: spearman value: 28.038768547398902 task: type: STS - dataset: config: en-tr name: MTEB STS17 (en-tr) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 1.103529546877178 - type: cosine_spearman value: -4.344212733215535 - type: euclidean_pearson value: 1.4727741337049667 - type: euclidean_spearman value: -4.344212733215535 - type: main_score value: -4.344212733215535 - type: manhattan_pearson value: 3.066202915700401 - type: manhattan_spearman value: -1.8750642080759847 - type: pearson value: 1.1035283349143992 - type: spearman value: -4.344212733215535 task: type: STS - dataset: config: ar-ar name: MTEB STS17 (ar-ar) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 34.20639223082103 - type: cosine_spearman value: 50.099745536749865 - type: euclidean_pearson value: 46.672397550772246 - type: euclidean_spearman value: 50.099745536749865 - type: main_score value: 50.099745536749865 - type: manhattan_pearson value: 46.27511641947692 - type: manhattan_spearman value: 49.773911789667046 - type: pearson value: 34.20638672669267 - type: spearman value: 50.092555073567134 task: type: STS - dataset: config: it-en name: MTEB STS17 (it-en) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 28.724311230307375 - type: cosine_spearman value: 28.250724805661058 - type: 
euclidean_pearson value: 28.315938847964397 - type: euclidean_spearman value: 28.250724805661058 - type: main_score value: 28.250724805661058 - type: manhattan_pearson value: 27.674391866744763 - type: manhattan_spearman value: 26.340658343374766 - type: pearson value: 28.724300976547458 - type: spearman value: 28.250724805661058 task: type: STS - dataset: config: es-es name: MTEB STS17 (es-es) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 68.60948138294124 - type: cosine_spearman value: 71.2644385447136 - type: euclidean_pearson value: 73.24322173392738 - type: euclidean_spearman value: 71.2644385447136 - type: main_score value: 71.2644385447136 - type: manhattan_pearson value: 72.77369507880161 - type: manhattan_spearman value: 70.79274454462727 - type: pearson value: 68.6094780579963 - type: spearman value: 71.26563420541223 task: type: STS - dataset: config: en-en name: MTEB STS17 (en-en) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 89.55726457989037 - type: cosine_spearman value: 89.80253006361272 - type: euclidean_pearson value: 89.52662858022953 - type: euclidean_spearman value: 89.80253006361272 - type: main_score value: 89.80253006361272 - type: manhattan_pearson value: 89.515204742883 - type: manhattan_spearman value: 89.85327040835446 - type: pearson value: 89.5572652518414 - type: spearman value: 89.80253006361272 task: type: STS - dataset: config: fr-en name: MTEB STS17 (fr-en) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 35.25506119586307 - type: cosine_spearman value: 34.7509473636312 - type: euclidean_pearson value: 34.966513615613614 - type: euclidean_spearman value: 34.7509473636312 - type: main_score value: 34.7509473636312 - type: manhattan_pearson value: 34.205440386931265 - type: manhattan_spearman value: 34.19580186457024 - type: pearson value: 35.255050728265005 - type: spearman value: 34.7509473636312 task: type: STS - dataset: config: nl-en name: MTEB STS17 (nl-en) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 29.15956073846505 - type: cosine_spearman value: 23.4150546328063 - type: euclidean_pearson value: 29.36385636868059 - type: euclidean_spearman value: 23.4150546328063 - type: main_score value: 23.4150546328063 - type: manhattan_pearson value: 27.709164754459888 - type: manhattan_spearman value: 21.483654161935416 - type: pearson value: 29.159552605316065 - type: spearman value: 23.4150546328063 task: type: STS - dataset: config: es-en name: MTEB STS17 (es-en) revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_pearson value: 16.614254117804347 - type: cosine_spearman value: 17.838933806777177 - type: euclidean_pearson value: 16.374166189982727 - type: euclidean_spearman value: 17.838933806777177 - type: main_score value: 17.838933806777177 - type: manhattan_pearson value: 17.061405031678706 - type: manhattan_spearman value: 17.720413541179628 - type: pearson value: 16.61424897880095 - type: spearman value: 17.838933806777177 task: type: STS - dataset: config: es-it name: MTEB STS22 (es-it) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 
33.923866543364184 - type: cosine_spearman value: 28.270091354677568 - type: euclidean_pearson value: 32.54215682097072 - type: euclidean_spearman value: 28.270091354677568 - type: main_score value: 28.270091354677568 - type: manhattan_pearson value: 32.330367232207976 - type: manhattan_spearman value: 28.255815608210906 - type: pearson value: 33.923868062160935 - type: spearman value: 28.270091354677568 task: type: STS - dataset: config: ar name: MTEB STS22 (ar) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 33.485734155602124 - type: cosine_spearman value: 35.184256236066545 - type: euclidean_pearson value: 35.72218023358308 - type: euclidean_spearman value: 35.184256236066545 - type: main_score value: 35.184256236066545 - type: manhattan_pearson value: 35.6410009964212 - type: manhattan_spearman value: 35.234893627175516 - type: pearson value: 33.485735809605345 - type: spearman value: 35.184256236066545 task: type: STS - dataset: config: de name: MTEB STS22 (de) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 9.234564624165541 - type: cosine_spearman value: 9.26135671318589 - type: euclidean_pearson value: 5.846102863095134 - type: euclidean_spearman value: 9.26135671318589 - type: main_score value: 9.26135671318589 - type: manhattan_pearson value: 5.7129856416500635 - type: manhattan_spearman value: 9.101317893245328 - type: pearson value: 9.23456334729063 - type: spearman value: 9.253696154102018 task: type: STS - dataset: config: de-pl name: MTEB STS22 (de-pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 7.541720515534752 - type: cosine_spearman value: 16.228492544303737 - type: euclidean_pearson value: 8.688753048966332 - type: euclidean_spearman value: 16.228492544303737 - type: main_score value: 16.228492544303737 - type: manhattan_pearson value: 8.82151603264942 - type: manhattan_spearman value: 14.072905299329804 - type: pearson value: 7.5417136414683394 - type: spearman value: 16.228492544303737 task: type: STS - dataset: config: zh name: MTEB STS22 (zh) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 14.272880349051686 - type: cosine_spearman value: 26.923117280473786 - type: euclidean_pearson value: 21.452060315735473 - type: euclidean_spearman value: 26.923117280473786 - type: main_score value: 26.923117280473786 - type: manhattan_pearson value: 21.100522438025678 - type: manhattan_spearman value: 26.877576505273343 - type: pearson value: 14.272881506036967 - type: spearman value: 26.92266584563422 task: type: STS - dataset: config: de-en name: MTEB STS22 (de-en) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 43.299601499947705 - type: cosine_spearman value: 44.906191184092506 - type: euclidean_pearson value: 44.27229004756392 - type: euclidean_spearman value: 44.906191184092506 - type: main_score value: 44.906191184092506 - type: manhattan_pearson value: 43.30226127948028 - type: manhattan_spearman value: 44.15123742556422 - type: pearson value: 43.29959849491429 - type: spearman value: 44.906191184092506 task: type: STS - dataset: config: zh-en name: MTEB STS22 (zh-en) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test 
type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 38.208833040701016 - type: cosine_spearman value: 40.35513412565045 - type: euclidean_pearson value: 37.69713615937953 - type: euclidean_spearman value: 40.35513412565045 - type: main_score value: 40.35513412565045 - type: manhattan_pearson value: 38.07295736531383 - type: manhattan_spearman value: 41.18949838024728 - type: pearson value: 38.20882899305228 - type: spearman value: 40.35513412565045 task: type: STS - dataset: config: fr name: MTEB STS22 (fr) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 57.64495357984024 - type: cosine_spearman value: 58.654277042743985 - type: euclidean_pearson value: 57.6918916487343 - type: euclidean_spearman value: 58.654277042743985 - type: main_score value: 58.654277042743985 - type: manhattan_pearson value: 57.44807489792652 - type: manhattan_spearman value: 58.55625686584207 - type: pearson value: 57.64495326981972 - type: spearman value: 58.654277042743985 task: type: STS - dataset: config: tr name: MTEB STS22 (tr) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 11.450692016022549 - type: cosine_spearman value: 17.51115938663503 - type: euclidean_pearson value: 15.02321459776712 - type: euclidean_spearman value: 17.51115938663503 - type: main_score value: 17.51115938663503 - type: manhattan_pearson value: 14.927559119791256 - type: manhattan_spearman value: 17.573688652181342 - type: pearson value: 11.450690790822087 - type: spearman value: 17.51115938663503 task: type: STS - dataset: config: pl-en name: MTEB STS22 (pl-en) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 43.813735649943744 - type: cosine_spearman value: 43.21656383381947 - type: euclidean_pearson value: 44.23609795164775 - type: euclidean_spearman value: 43.21656383381947 - type: main_score value: 43.21656383381947 - type: manhattan_pearson value: 45.15245284513481 - type: manhattan_spearman value: 44.617872376647036 - type: pearson value: 43.81373603921556 - type: spearman value: 43.21656383381947 task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 2.064593579711082 - type: cosine_spearman value: 13.671807943753796 - type: euclidean_pearson value: -0.2014491570823139 - type: euclidean_spearman value: 13.671807943753796 - type: main_score value: 13.671807943753796 - type: manhattan_pearson value: -0.47928238790519784 - type: manhattan_spearman value: 13.539652287241891 - type: pearson value: 2.064597029930494 - type: spearman value: 13.877866791327195 task: type: STS - dataset: config: it name: MTEB STS22 (it) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 34.34600586953367 - type: cosine_spearman value: 35.333432813362556 - type: euclidean_pearson value: 35.55800799947818 - type: euclidean_spearman value: 35.333432813362556 - type: main_score value: 35.333432813362556 - type: manhattan_pearson value: 35.52036355368523 - type: manhattan_spearman value: 35.32333999151806 - type: pearson value: 34.34600393440248 - type: spearman value: 35.333432813362556 task: type: STS - dataset: config: de-fr name: MTEB STS22 (de-fr) 
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 34.38468852241422 - type: cosine_spearman value: 37.517923206663426 - type: euclidean_pearson value: 36.15747644065988 - type: euclidean_spearman value: 37.517923206663426 - type: main_score value: 37.517923206663426 - type: manhattan_pearson value: 36.367891064808674 - type: manhattan_spearman value: 36.98092141912518 - type: pearson value: 34.3846838424612 - type: spearman value: 37.517923206663426 task: type: STS - dataset: config: en name: MTEB STS22 (en) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 62.10106220976315 - type: cosine_spearman value: 62.162497286103665 - type: euclidean_pearson value: 63.47223854069964 - type: euclidean_spearman value: 62.162497286103665 - type: main_score value: 62.162497286103665 - type: manhattan_pearson value: 63.20260003333065 - type: manhattan_spearman value: 61.91516380449414 - type: pearson value: 62.10106168936663 - type: spearman value: 62.162497286103665 task: type: STS - dataset: config: es-en name: MTEB STS22 (es-en) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 43.060498443703956 - type: cosine_spearman value: 46.96962646942495 - type: euclidean_pearson value: 44.32895696900264 - type: euclidean_spearman value: 46.96962646942495 - type: main_score value: 46.96962646942495 - type: manhattan_pearson value: 44.35969054771841 - type: manhattan_spearman value: 47.06500736586804 - type: pearson value: 43.060495996175085 - type: spearman value: 46.96962646942495 task: type: STS - dataset: config: es name: MTEB STS22 (es) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 31.552876706984677 - type: cosine_spearman value: 36.385581860846955 - type: euclidean_pearson value: 35.09094631898299 - type: euclidean_spearman value: 36.385581860846955 - type: main_score value: 36.385581860846955 - type: manhattan_pearson value: 34.90394958111092 - type: manhattan_spearman value: 36.31134774911119 - type: pearson value: 31.552876690362574 - type: spearman value: 36.385581860846955 task: type: STS - dataset: config: fr-pl name: MTEB STS22 (fr-pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 69.17422308603082 - type: cosine_spearman value: 61.97797868009122 - type: euclidean_pearson value: 68.87185772300731 - type: euclidean_spearman value: 61.97797868009122 - type: main_score value: 61.97797868009122 - type: manhattan_pearson value: 69.95446576863942 - type: manhattan_spearman value: 61.97797868009122 - type: pearson value: 69.17420713169005 - type: spearman value: 61.97797868009122 task: type: STS - dataset: config: ru name: MTEB STS22 (ru) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 7.810349558361704 - type: cosine_spearman value: 13.082192851727692 - type: euclidean_pearson value: 9.256112073506458 - type: euclidean_spearman value: 13.082192851727692 - type: main_score value: 13.082192851727692 - type: manhattan_pearson value: 8.72571445455125 - type: manhattan_spearman value: 12.72837879957086 - type: pearson value: 7.8103471976464665 - type: spearman value: 13.082192851727692 task: 
type: STS - dataset: config: default name: MTEB STSBenchmark (default) revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cosine_pearson value: 81.60094971873578 - type: cosine_spearman value: 82.21497157084686 - type: euclidean_pearson value: 82.7649580865935 - type: euclidean_spearman value: 82.21487652663633 - type: main_score value: 82.21497157084686 - type: manhattan_pearson value: 82.60879748854263 - type: manhattan_spearman value: 82.02680374744259 - type: pearson value: 81.60094958106916 - type: spearman value: 82.21497157084686 task: type: STS - dataset: config: default name: MTEB SciDocsRR (default) revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab split: test type: mteb/scidocs-reranking metrics: - type: main_score value: 88.0620858216092 - type: map value: 88.0620858216092 - type: mrr value: 96.56171442936149 - type: nAUC_map_diff1 value: -10.48187303124293 - type: nAUC_map_max value: 56.61425009792597 - type: nAUC_map_std value: 71.46629366886685 - type: nAUC_mrr_diff1 value: 32.581743266040704 - type: nAUC_mrr_max value: 88.42416536858423 - type: nAUC_mrr_std value: 83.97838942252417 task: type: Reranking - dataset: config: default name: MTEB SciFact (default) revision: 0228b52cf27578f30900b9e5271d331663a030d7 split: test type: mteb/scifact metrics: - type: main_score value: 60.521 - type: map_at_1 value: 44.622 - type: map_at_10 value: 55.033 - type: map_at_100 value: 55.659000000000006 - type: map_at_1000 value: 55.701 - type: map_at_20 value: 55.379999999999995 - type: map_at_3 value: 51.918 - type: map_at_5 value: 53.563 - type: mrr_at_1 value: 46.666666666666664 - type: mrr_at_10 value: 56.386640211640184 - type: mrr_at_100 value: 56.86324391737727 - type: mrr_at_1000 value: 56.899770911998424 - type: mrr_at_20 value: 56.63502866958745 - type: mrr_at_3 value: 53.88888888888887 - type: mrr_at_5 value: 55.388888888888886 - type: nauc_map_at_1000_diff1 value: 60.51522831114532 - type: nauc_map_at_1000_max value: 38.729857044583156 - type: nauc_map_at_1000_std value: 11.893619390095981 - type: nauc_map_at_100_diff1 value: 60.49224806809106 - type: nauc_map_at_100_max value: 38.757391171919394 - type: nauc_map_at_100_std value: 11.904869622843274 - type: nauc_map_at_10_diff1 value: 60.59184266027688 - type: nauc_map_at_10_max value: 38.814372515682855 - type: nauc_map_at_10_std value: 12.062834601122749 - type: nauc_map_at_1_diff1 value: 65.53217534037644 - type: nauc_map_at_1_max value: 28.820516516628203 - type: nauc_map_at_1_std value: 0.31747544825532187 - type: nauc_map_at_20_diff1 value: 60.49567580134993 - type: nauc_map_at_20_max value: 38.75172270044811 - type: nauc_map_at_20_std value: 12.03257481376082 - type: nauc_map_at_3_diff1 value: 61.95672662726325 - type: nauc_map_at_3_max value: 36.21518511557246 - type: nauc_map_at_3_std value: 8.725184216964175 - type: nauc_map_at_5_diff1 value: 61.59116859538641 - type: nauc_map_at_5_max value: 37.099329579199555 - type: nauc_map_at_5_std value: 9.77500772774504 - type: nauc_mrr_at_1000_diff1 value: 59.42105746719931 - type: nauc_mrr_at_1000_max value: 40.45270689627016 - type: nauc_mrr_at_1000_std value: 14.875759703157328 - type: nauc_mrr_at_100_diff1 value: 59.39676827842953 - type: nauc_mrr_at_100_max value: 40.471355257224204 - type: nauc_mrr_at_100_std value: 14.869956496055684 - type: nauc_mrr_at_10_diff1 value: 59.259253351613026 - type: nauc_mrr_at_10_max value: 40.51223230352826 - type: nauc_mrr_at_10_std value: 15.184710455010213 - type: 
nauc_mrr_at_1_diff1 value: 64.15225217314949 - type: nauc_mrr_at_1_max value: 32.78250943893229 - type: nauc_mrr_at_1_std value: 6.2994995170778845 - type: nauc_mrr_at_20_diff1 value: 59.353790297025796 - type: nauc_mrr_at_20_max value: 40.50201824596607 - type: nauc_mrr_at_20_std value: 15.006822621780044 - type: nauc_mrr_at_3_diff1 value: 60.38367084336967 - type: nauc_mrr_at_3_max value: 39.80345696060504 - type: nauc_mrr_at_3_std value: 14.692239492205003 - type: nauc_mrr_at_5_diff1 value: 59.88907677461462 - type: nauc_mrr_at_5_max value: 40.45929072557247 - type: nauc_mrr_at_5_std value: 14.716015778108465 - type: nauc_ndcg_at_1000_diff1 value: 57.912136336340005 - type: nauc_ndcg_at_1000_max value: 41.805732022231986 - type: nauc_ndcg_at_1000_std value: 16.26204553755279 - type: nauc_ndcg_at_100_diff1 value: 57.20651790458488 - type: nauc_ndcg_at_100_max value: 42.35739596941831 - type: nauc_ndcg_at_100_std value: 16.634877514141273 - type: nauc_ndcg_at_10_diff1 value: 57.21191994351011 - type: nauc_ndcg_at_10_max value: 42.90550640696327 - type: nauc_ndcg_at_10_std value: 17.95798580533489 - type: nauc_ndcg_at_1_diff1 value: 64.15225217314949 - type: nauc_ndcg_at_1_max value: 32.78250943893229 - type: nauc_ndcg_at_1_std value: 6.2994995170778845 - type: nauc_ndcg_at_20_diff1 value: 57.194338622478966 - type: nauc_ndcg_at_20_max value: 42.87305570893914 - type: nauc_ndcg_at_20_std value: 17.722159014890266 - type: nauc_ndcg_at_3_diff1 value: 59.87720937097173 - type: nauc_ndcg_at_3_max value: 39.382390067554226 - type: nauc_ndcg_at_3_std value: 13.309064283277102 - type: nauc_ndcg_at_5_diff1 value: 59.43934315091936 - type: nauc_ndcg_at_5_max value: 40.3233098967969 - type: nauc_ndcg_at_5_std value: 13.866407342104736 - type: nauc_precision_at_1000_diff1 value: -25.868335387629987 - type: nauc_precision_at_1000_max value: 30.021175057678224 - type: nauc_precision_at_1000_std value: 48.10593849751903 - type: nauc_precision_at_100_diff1 value: -9.246187406942138 - type: nauc_precision_at_100_max value: 38.74029847793067 - type: nauc_precision_at_100_std value: 46.21609723039785 - type: nauc_precision_at_10_diff1 value: 14.57423342836458 - type: nauc_precision_at_10_max value: 53.79340709160492 - type: nauc_precision_at_10_std value: 51.46731880835919 - type: nauc_precision_at_1_diff1 value: 64.15225217314949 - type: nauc_precision_at_1_max value: 32.78250943893229 - type: nauc_precision_at_1_std value: 6.2994995170778845 - type: nauc_precision_at_20_diff1 value: 7.661113308500396 - type: nauc_precision_at_20_max value: 48.52849772461544 - type: nauc_precision_at_20_std value: 50.06541458940931 - type: nauc_precision_at_3_diff1 value: 45.45337537808087 - type: nauc_precision_at_3_max value: 49.37622387908755 - type: nauc_precision_at_3_std value: 31.44254548463311 - type: nauc_precision_at_5_diff1 value: 37.18964707686113 - type: nauc_precision_at_5_max value: 52.78467184760005 - type: nauc_precision_at_5_std value: 37.360215414394794 - type: nauc_recall_at_1000_diff1 value: 2.4976657329602574 - type: nauc_recall_at_1000_max value: 89.30127606598155 - type: nauc_recall_at_1000_std value: 88.21195144724587 - type: nauc_recall_at_100_diff1 value: 32.06651881995142 - type: nauc_recall_at_100_max value: 61.723309041324924 - type: nauc_recall_at_100_std value: 40.02293962790224 - type: nauc_recall_at_10_diff1 value: 43.70825842063566 - type: nauc_recall_at_10_max value: 56.410600790424525 - type: nauc_recall_at_10_std value: 37.41756199503754 - type: nauc_recall_at_1_diff1 value: 
65.53217534037644 - type: nauc_recall_at_1_max value: 28.820516516628203 - type: nauc_recall_at_1_std value: 0.31747544825532187 - type: nauc_recall_at_20_diff1 value: 42.28136019198356 - type: nauc_recall_at_20_max value: 58.728571008292086 - type: nauc_recall_at_20_std value: 38.78438410167401 - type: nauc_recall_at_3_diff1 value: 55.25292186933368 - type: nauc_recall_at_3_max value: 42.11001146172373 - type: nauc_recall_at_3_std value: 18.099532745235926 - type: nauc_recall_at_5_diff1 value: 53.15377657121759 - type: nauc_recall_at_5_max value: 45.75866806524819 - type: nauc_recall_at_5_std value: 20.94017969011607 - type: ndcg_at_1 value: 46.666999999999994 - type: ndcg_at_10 value: 60.521 - type: ndcg_at_100 value: 63.466 - type: ndcg_at_1000 value: 64.517 - type: ndcg_at_20 value: 61.632 - type: ndcg_at_3 value: 54.832 - type: ndcg_at_5 value: 57.459 - type: precision_at_1 value: 46.666999999999994 - type: precision_at_10 value: 8.6 - type: precision_at_100 value: 1.023 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_20 value: 4.567 - type: precision_at_3 value: 21.778 - type: precision_at_5 value: 14.667 - type: recall_at_1 value: 44.622 - type: recall_at_10 value: 75.98899999999999 - type: recall_at_100 value: 89.767 - type: recall_at_1000 value: 98.0 - type: recall_at_20 value: 80.167 - type: recall_at_3 value: 60.633 - type: recall_at_5 value: 67.161 task: type: Retrieval - dataset: config: default name: MTEB SciFact (default) revision: 0228b52cf27578f30900b9e5271d331663a030d7 split: train type: mteb/scifact metrics: - type: main_score value: 60.968 - type: map_at_1 value: 45.56 - type: map_at_10 value: 55.931 - type: map_at_100 value: 56.742000000000004 - type: map_at_1000 value: 56.776 - type: map_at_20 value: 56.424 - type: map_at_3 value: 53.077 - type: map_at_5 value: 54.869 - type: mrr_at_1 value: 48.084054388133495 - type: mrr_at_10 value: 57.47807404791333 - type: mrr_at_100 value: 58.099160087560875 - type: mrr_at_1000 value: 58.12765167354059 - type: mrr_at_20 value: 57.819082752169095 - type: mrr_at_3 value: 55.356407086938574 - type: mrr_at_5 value: 56.74701277297069 - type: nauc_map_at_1000_diff1 value: 55.74211749136413 - type: nauc_map_at_1000_max value: 38.56607968837275 - type: nauc_map_at_1000_std value: 8.843116088465356 - type: nauc_map_at_100_diff1 value: 55.73174781950032 - type: nauc_map_at_100_max value: 38.58911494708111 - type: nauc_map_at_100_std value: 8.88289997368927 - type: nauc_map_at_10_diff1 value: 55.684855899450326 - type: nauc_map_at_10_max value: 38.61699467700117 - type: nauc_map_at_10_std value: 8.843324283472004 - type: nauc_map_at_1_diff1 value: 60.51537312610602 - type: nauc_map_at_1_max value: 33.79559472004539 - type: nauc_map_at_1_std value: 2.231571936720321 - type: nauc_map_at_20_diff1 value: 55.64541576881228 - type: nauc_map_at_20_max value: 38.46528450367595 - type: nauc_map_at_20_std value: 8.941393850487122 - type: nauc_map_at_3_diff1 value: 56.72487646415579 - type: nauc_map_at_3_max value: 35.519310437502725 - type: nauc_map_at_3_std value: 5.488759577412841 - type: nauc_map_at_5_diff1 value: 55.803777316483064 - type: nauc_map_at_5_max value: 37.41665211733391 - type: nauc_map_at_5_std value: 7.806708616433203 - type: nauc_mrr_at_1000_diff1 value: 54.81790389145318 - type: nauc_mrr_at_1000_max value: 39.86866604814943 - type: nauc_mrr_at_1000_std value: 9.635106150864706 - type: nauc_mrr_at_100_diff1 value: 54.81237221566927 - type: nauc_mrr_at_100_max value: 39.8961644304281 - type: 
nauc_mrr_at_100_std value: 9.675552350620915 - type: nauc_mrr_at_10_diff1 value: 54.61930534460625 - type: nauc_mrr_at_10_max value: 39.92147432792213 - type: nauc_mrr_at_10_std value: 9.757897616300196 - type: nauc_mrr_at_1_diff1 value: 59.09832131205337 - type: nauc_mrr_at_1_max value: 37.460196328629216 - type: nauc_mrr_at_1_std value: 4.083199168195615 - type: nauc_mrr_at_20_diff1 value: 54.71252617109308 - type: nauc_mrr_at_20_max value: 39.816810748191386 - type: nauc_mrr_at_20_std value: 9.74015716919154 - type: nauc_mrr_at_3_diff1 value: 54.84574115652329 - type: nauc_mrr_at_3_max value: 38.93977163525027 - type: nauc_mrr_at_3_std value: 7.5157914615030155 - type: nauc_mrr_at_5_diff1 value: 54.28866742264963 - type: nauc_mrr_at_5_max value: 39.857609786127355 - type: nauc_mrr_at_5_std value: 9.748147844235504 - type: nauc_ndcg_at_1000_diff1 value: 54.828301594419926 - type: nauc_ndcg_at_1000_max value: 40.792175720018 - type: nauc_ndcg_at_1000_std value: 11.993991378627298 - type: nauc_ndcg_at_100_diff1 value: 54.56732091443972 - type: nauc_ndcg_at_100_max value: 41.520966863404 - type: nauc_ndcg_at_100_std value: 13.093045429472133 - type: nauc_ndcg_at_10_diff1 value: 53.867885134870754 - type: nauc_ndcg_at_10_max value: 41.26263077190897 - type: nauc_ndcg_at_10_std value: 13.180315525575079 - type: nauc_ndcg_at_1_diff1 value: 59.09832131205337 - type: nauc_ndcg_at_1_max value: 37.460196328629216 - type: nauc_ndcg_at_1_std value: 4.083199168195615 - type: nauc_ndcg_at_20_diff1 value: 54.04981222269078 - type: nauc_ndcg_at_20_max value: 40.804574285983215 - type: nauc_ndcg_at_20_std value: 13.25303328684406 - type: nauc_ndcg_at_3_diff1 value: 54.98672360900976 - type: nauc_ndcg_at_3_max value: 37.021872876909285 - type: nauc_ndcg_at_3_std value: 7.369741800312488 - type: nauc_ndcg_at_5_diff1 value: 53.738484591338775 - type: nauc_ndcg_at_5_max value: 39.476934276971114 - type: nauc_ndcg_at_5_std value: 11.468428891779542 - type: nauc_precision_at_1000_diff1 value: -19.823126106144006 - type: nauc_precision_at_1000_max value: 30.52146999323513 - type: nauc_precision_at_1000_std value: 29.933270069329826 - type: nauc_precision_at_100_diff1 value: -6.440029104435929 - type: nauc_precision_at_100_max value: 39.331835639782156 - type: nauc_precision_at_100_std value: 34.94900154413054 - type: nauc_precision_at_10_diff1 value: 16.817759554481484 - type: nauc_precision_at_10_max value: 49.249021274893906 - type: nauc_precision_at_10_std value: 33.75171756578377 - type: nauc_precision_at_1_diff1 value: 59.09832131205337 - type: nauc_precision_at_1_max value: 37.460196328629216 - type: nauc_precision_at_1_std value: 4.083199168195615 - type: nauc_precision_at_20_diff1 value: 9.157889901211199 - type: nauc_precision_at_20_max value: 41.571198827265164 - type: nauc_precision_at_20_std value: 32.65877960233835 - type: nauc_precision_at_3_diff1 value: 39.681670852445436 - type: nauc_precision_at_3_max value: 42.86111363688067 - type: nauc_precision_at_3_std value: 14.510088215612296 - type: nauc_precision_at_5_diff1 value: 27.384565276080302 - type: nauc_precision_at_5_max value: 47.40454163560573 - type: nauc_precision_at_5_std value: 25.867913617850697 - type: nauc_recall_at_1000_diff1 value: 71.51075746149512 - type: nauc_recall_at_1000_max value: 76.34440349098203 - type: nauc_recall_at_1000_std value: 80.44096320223078 - type: nauc_recall_at_100_diff1 value: 51.71053059895534 - type: nauc_recall_at_100_max value: 66.88452980621679 - type: nauc_recall_at_100_std value: 53.499016266851065 - 
type: nauc_recall_at_10_diff1 value: 47.702505621629825 - type: nauc_recall_at_10_max value: 48.18585866434868 - type: nauc_recall_at_10_std value: 28.809966003226883 - type: nauc_recall_at_1_diff1 value: 60.51537312610602 - type: nauc_recall_at_1_max value: 33.79559472004539 - type: nauc_recall_at_1_std value: 2.231571936720321 - type: nauc_recall_at_20_diff1 value: 48.95315412443835 - type: nauc_recall_at_20_max value: 47.846213097193406 - type: nauc_recall_at_20_std value: 32.44656771521295 - type: nauc_recall_at_3_diff1 value: 51.09488539609923 - type: nauc_recall_at_3_max value: 36.21610788720492 - type: nauc_recall_at_3_std value: 10.351917684505048 - type: nauc_recall_at_5_diff1 value: 47.52872180072941 - type: nauc_recall_at_5_max value: 42.41341732768043 - type: nauc_recall_at_5_std value: 22.031877672306795 - type: ndcg_at_1 value: 48.083999999999996 - type: ndcg_at_10 value: 60.968 - type: ndcg_at_100 value: 64.68199999999999 - type: ndcg_at_1000 value: 65.522 - type: ndcg_at_20 value: 62.50599999999999 - type: ndcg_at_3 value: 55.95 - type: ndcg_at_5 value: 58.75899999999999 - type: precision_at_1 value: 48.083999999999996 - type: precision_at_10 value: 8.455 - type: precision_at_100 value: 1.051 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_20 value: 4.604 - type: precision_at_3 value: 22.415 - type: precision_at_5 value: 15.129999999999999 - type: recall_at_1 value: 45.56 - type: recall_at_10 value: 74.714 - type: recall_at_100 value: 91.862 - type: recall_at_1000 value: 98.393 - type: recall_at_20 value: 80.449 - type: recall_at_3 value: 61.747 - type: recall_at_5 value: 68.381 task: type: Retrieval - dataset: config: default name: MTEB SprintDuplicateQuestions (default) revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 split: test type: mteb/sprintduplicatequestions-pairclassification metrics: - type: cosine_accuracy value: 99.52376237623763 - type: cosine_accuracy_threshold value: 83.40442776679993 - type: cosine_ap value: 82.29539182575489 - type: cosine_f1 value: 73.40139709833423 - type: cosine_f1_threshold value: 81.47163987159729 - type: cosine_precision value: 79.32636469221835 - type: cosine_recall value: 68.30000000000001 - type: dot_accuracy value: 99.52376237623763 - type: dot_accuracy_threshold value: 83.40442776679993 - type: dot_ap value: 82.29539339884745 - type: dot_f1 value: 73.40139709833423 - type: dot_f1_threshold value: 81.47163391113281 - type: dot_precision value: 79.32636469221835 - type: dot_recall value: 68.30000000000001 - type: euclidean_accuracy value: 99.52376237623763 - type: euclidean_accuracy_threshold value: 57.61163830757141 - type: euclidean_ap value: 82.29539339884744 - type: euclidean_f1 value: 73.40139709833423 - type: euclidean_f1_threshold value: 60.8742356300354 - type: euclidean_precision value: 79.32636469221835 - type: euclidean_recall value: 68.30000000000001 - type: main_score value: 82.29539339884745 - type: manhattan_accuracy value: 99.50990099009901 - type: manhattan_accuracy_threshold value: 1265.035057067871 - type: manhattan_ap value: 81.78663637659726 - type: manhattan_f1 value: 73.2620320855615 - type: manhattan_f1_threshold value: 1333.2223892211914 - type: manhattan_precision value: 78.73563218390804 - type: manhattan_recall value: 68.5 - type: max_accuracy value: 99.52376237623763 - type: max_ap value: 82.29539339884745 - type: max_f1 value: 73.40139709833423 - type: max_precision value: 79.32636469221835 - type: max_recall value: 68.5 - type: similarity_accuracy value: 99.52376237623763 
- type: similarity_accuracy_threshold value: 83.40443968772888 - type: similarity_ap value: 82.2953820339594 - type: similarity_f1 value: 73.40139709833423 - type: similarity_f1_threshold value: 81.47163987159729 - type: similarity_precision value: 79.32636469221835 - type: similarity_recall value: 68.30000000000001 task: type: PairClassification - dataset: config: default name: MTEB SprintDuplicateQuestions (default) revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 split: validation type: mteb/sprintduplicatequestions-pairclassification metrics: - type: cosine_accuracy value: 99.4990099009901 - type: cosine_accuracy_threshold value: 80.80509901046753 - type: cosine_ap value: 82.11932803714892 - type: cosine_f1 value: 73.94724664507173 - type: cosine_f1_threshold value: 75.20748376846313 - type: cosine_precision value: 68.81998277347114 - type: cosine_recall value: 79.9 - type: dot_accuracy value: 99.4990099009901 - type: dot_accuracy_threshold value: 80.80509901046753 - type: dot_ap value: 82.11932803714892 - type: dot_f1 value: 73.94724664507173 - type: dot_f1_threshold value: 75.20748376846313 - type: dot_precision value: 68.81998277347114 - type: dot_recall value: 79.9 - type: euclidean_accuracy value: 99.4990099009901 - type: euclidean_accuracy_threshold value: 61.959511041641235 - type: euclidean_ap value: 82.11932803714893 - type: euclidean_f1 value: 73.94724664507173 - type: euclidean_f1_threshold value: 70.41663527488708 - type: euclidean_precision value: 68.81998277347114 - type: euclidean_recall value: 79.9 - type: main_score value: 82.11932803714893 - type: manhattan_accuracy value: 99.48910891089109 - type: manhattan_accuracy_threshold value: 1331.6292762756348 - type: manhattan_ap value: 81.61037880327734 - type: manhattan_f1 value: 73.582295988935 - type: manhattan_f1_threshold value: 1543.6763763427734 - type: manhattan_precision value: 68.26347305389223 - type: manhattan_recall value: 79.80000000000001 - type: max_accuracy value: 99.4990099009901 - type: max_ap value: 82.11932803714893 - type: max_f1 value: 73.94724664507173 - type: max_precision value: 68.81998277347114 - type: max_recall value: 79.9 - type: similarity_accuracy value: 99.4990099009901 - type: similarity_accuracy_threshold value: 80.80509901046753 - type: similarity_ap value: 82.11932803714892 - type: similarity_f1 value: 73.94724664507173 - type: similarity_f1_threshold value: 75.20748376846313 - type: similarity_precision value: 68.81998277347114 - type: similarity_recall value: 79.9 task: type: PairClassification - dataset: config: default name: MTEB StackOverflowDupQuestions (default) revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 split: test type: mteb/stackoverflowdupquestions-reranking metrics: - type: main_score value: 50.48960206352392 - type: map value: 50.48960206352392 - type: mrr value: 51.35440539852305 - type: nAUC_map_diff1 value: 34.53252917797536 - type: nAUC_map_max value: 12.790792149348837 - type: nAUC_map_std value: 6.669172097020934 - type: nAUC_mrr_diff1 value: 35.200389562175985 - type: nAUC_mrr_max value: 13.668483379551915 - type: nAUC_mrr_std value: 6.28436956387114 task: type: Reranking - dataset: config: default name: MTEB TRECCOVID (default) revision: bb9466bac8153a0349341eb1b22e06409e78ef4e split: test type: mteb/trec-covid metrics: - type: main_score value: 56.266000000000005 - type: map_at_1 value: 0.193 - type: map_at_10 value: 1.3050000000000002 - type: map_at_100 value: 7.3340000000000005 - type: map_at_1000 value: 19.07 - type: map_at_20 value: 2.246 - type: 
map_at_3 value: 0.46699999999999997 - type: map_at_5 value: 0.733 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 77.06666666666666 - type: mrr_at_100 value: 77.24651162790698 - type: mrr_at_1000 value: 77.24651162790698 - type: mrr_at_20 value: 77.2 - type: mrr_at_3 value: 75.33333333333334 - type: mrr_at_5 value: 76.73333333333332 - type: nauc_map_at_1000_diff1 value: -24.73401101431591 - type: nauc_map_at_1000_max value: 50.802272967189666 - type: nauc_map_at_1000_std value: 86.72287095417501 - type: nauc_map_at_100_diff1 value: -25.085120642600828 - type: nauc_map_at_100_max value: 49.31639831109537 - type: nauc_map_at_100_std value: 71.17329215601556 - type: nauc_map_at_10_diff1 value: -14.614132651074147 - type: nauc_map_at_10_max value: 32.721906387813256 - type: nauc_map_at_10_std value: 25.60590793742716 - type: nauc_map_at_1_diff1 value: -4.218583653029754 - type: nauc_map_at_1_max value: 18.589859656136742 - type: nauc_map_at_1_std value: -0.7303444219812425 - type: nauc_map_at_20_diff1 value: -17.63292260337728 - type: nauc_map_at_20_max value: 36.89990317740711 - type: nauc_map_at_20_std value: 36.25627402017833 - type: nauc_map_at_3_diff1 value: -10.619502082953264 - type: nauc_map_at_3_max value: 26.141633359773707 - type: nauc_map_at_3_std value: 9.472414121311855 - type: nauc_map_at_5_diff1 value: -10.032105512858019 - type: nauc_map_at_5_max value: 30.19477111697671 - type: nauc_map_at_5_std value: 13.615465098322698 - type: nauc_mrr_at_1000_diff1 value: 18.351107936542924 - type: nauc_mrr_at_1000_max value: 32.724989617966784 - type: nauc_mrr_at_1000_std value: 11.379920959476115 - type: nauc_mrr_at_100_diff1 value: 18.351107936542924 - type: nauc_mrr_at_100_max value: 32.724989617966784 - type: nauc_mrr_at_100_std value: 11.379920959476115 - type: nauc_mrr_at_10_diff1 value: 18.810199118309463 - type: nauc_mrr_at_10_max value: 32.90746100091763 - type: nauc_mrr_at_10_std value: 12.170087976539595 - type: nauc_mrr_at_1_diff1 value: 22.953123004215087 - type: nauc_mrr_at_1_max value: 33.759100779154394 - type: nauc_mrr_at_1_std value: 5.521139353684961 - type: nauc_mrr_at_20_diff1 value: 18.440581264618835 - type: nauc_mrr_at_20_max value: 32.887391287492 - type: nauc_mrr_at_20_std value: 11.562294043911466 - type: nauc_mrr_at_3_diff1 value: 13.326882737022542 - type: nauc_mrr_at_3_max value: 31.03893993559256 - type: nauc_mrr_at_3_std value: 12.094868082843375 - type: nauc_mrr_at_5_diff1 value: 15.553466717555938 - type: nauc_mrr_at_5_max value: 31.399808850294487 - type: nauc_mrr_at_5_std value: 13.184440540139914 - type: nauc_ndcg_at_1000_diff1 value: -21.89727935740918 - type: nauc_ndcg_at_1000_max value: 46.822006422768894 - type: nauc_ndcg_at_1000_std value: 81.91290246912763 - type: nauc_ndcg_at_100_diff1 value: -17.48350814577877 - type: nauc_ndcg_at_100_max value: 46.79297448451001 - type: nauc_ndcg_at_100_std value: 78.8401319727116 - type: nauc_ndcg_at_10_diff1 value: -1.3821765420279954 - type: nauc_ndcg_at_10_max value: 39.99897786678473 - type: nauc_ndcg_at_10_std value: 46.364065597834106 - type: nauc_ndcg_at_1_diff1 value: 13.291418162587759 - type: nauc_ndcg_at_1_max value: 39.50553809090106 - type: nauc_ndcg_at_1_std value: 17.240062285160228 - type: nauc_ndcg_at_20_diff1 value: -5.006533721396937 - type: nauc_ndcg_at_20_max value: 40.08638499376968 - type: nauc_ndcg_at_20_std value: 54.21211699915291 - type: nauc_ndcg_at_3_diff1 value: 1.1177340010993853 - type: nauc_ndcg_at_3_max value: 31.249409630478247 - type: nauc_ndcg_at_3_std value: 
23.5436216578103 - type: nauc_ndcg_at_5_diff1 value: -1.1160397524543895 - type: nauc_ndcg_at_5_max value: 34.660234833569106 - type: nauc_ndcg_at_5_std value: 32.63370377000179 - type: nauc_precision_at_1000_diff1 value: -13.0177453890889 - type: nauc_precision_at_1000_max value: 30.72811980155358 - type: nauc_precision_at_1000_std value: 63.304048454768505 - type: nauc_precision_at_100_diff1 value: -17.976796177452165 - type: nauc_precision_at_100_max value: 46.981157094069914 - type: nauc_precision_at_100_std value: 82.25965581862002 - type: nauc_precision_at_10_diff1 value: -0.615847761785869 - type: nauc_precision_at_10_max value: 43.227090827429116 - type: nauc_precision_at_10_std value: 54.13755012361465 - type: nauc_precision_at_1_diff1 value: 22.953123004215087 - type: nauc_precision_at_1_max value: 33.759100779154394 - type: nauc_precision_at_1_std value: 5.521139353684961 - type: nauc_precision_at_20_diff1 value: -3.9540692673731157 - type: nauc_precision_at_20_max value: 44.35467403164495 - type: nauc_precision_at_20_std value: 60.82047452956477 - type: nauc_precision_at_3_diff1 value: 3.020914020139455 - type: nauc_precision_at_3_max value: 30.52731931120773 - type: nauc_precision_at_3_std value: 23.58338795209433 - type: nauc_precision_at_5_diff1 value: -1.4227357063785067 - type: nauc_precision_at_5_max value: 37.08155978608297 - type: nauc_precision_at_5_std value: 37.53293730607916 - type: nauc_recall_at_1000_diff1 value: -20.649212891265538 - type: nauc_recall_at_1000_max value: 43.07097184316376 - type: nauc_recall_at_1000_std value: 77.81281303329497 - type: nauc_recall_at_100_diff1 value: -25.20899327705024 - type: nauc_recall_at_100_max value: 44.06686144213573 - type: nauc_recall_at_100_std value: 64.51149000534564 - type: nauc_recall_at_10_diff1 value: -16.705615179862683 - type: nauc_recall_at_10_max value: 27.971302554687405 - type: nauc_recall_at_10_std value: 24.823568423744703 - type: nauc_recall_at_1_diff1 value: -4.218583653029754 - type: nauc_recall_at_1_max value: 18.589859656136742 - type: nauc_recall_at_1_std value: -0.7303444219812425 - type: nauc_recall_at_20_diff1 value: -18.61953310144389 - type: nauc_recall_at_20_max value: 32.59370606161681 - type: nauc_recall_at_20_std value: 33.65262269593963 - type: nauc_recall_at_3_diff1 value: -12.73760452556664 - type: nauc_recall_at_3_max value: 23.76654241253409 - type: nauc_recall_at_3_std value: 9.977574788163437 - type: nauc_recall_at_5_diff1 value: -13.251480959967921 - type: nauc_recall_at_5_max value: 28.794884382741404 - type: nauc_recall_at_5_std value: 15.784579094515388 - type: ndcg_at_1 value: 57.99999999999999 - type: ndcg_at_10 value: 56.266000000000005 - type: ndcg_at_100 value: 43.847 - type: ndcg_at_1000 value: 42.278 - type: ndcg_at_20 value: 52.928 - type: ndcg_at_3 value: 58.531 - type: ndcg_at_5 value: 58.792 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 60.6 - type: precision_at_100 value: 45.28 - type: precision_at_1000 value: 19.122 - type: precision_at_20 value: 56.00000000000001 - type: precision_at_3 value: 63.333 - type: precision_at_5 value: 64.4 - type: recall_at_1 value: 0.193 - type: recall_at_10 value: 1.5890000000000002 - type: recall_at_100 value: 10.881 - type: recall_at_1000 value: 40.844 - type: recall_at_20 value: 2.861 - type: recall_at_3 value: 0.5 - type: recall_at_5 value: 0.835 task: type: Retrieval - dataset: config: default name: MTEB Touche2020 (default) revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f split: test type: mteb/touche2020 
metrics: - type: main_score value: 18.801000000000002 - type: map_at_1 value: 1.522 - type: map_at_10 value: 7.117999999999999 - type: map_at_100 value: 12.491 - type: map_at_1000 value: 13.913 - type: map_at_20 value: 9.014999999999999 - type: map_at_3 value: 4.144 - type: map_at_5 value: 5.192 - type: mrr_at_1 value: 20.408163265306122 - type: mrr_at_10 value: 34.63637836086815 - type: mrr_at_100 value: 35.855625090003045 - type: mrr_at_1000 value: 35.9025357255321 - type: mrr_at_20 value: 35.528481245452234 - type: mrr_at_3 value: 29.931972789115648 - type: mrr_at_5 value: 31.87074829931973 - type: nauc_map_at_1000_diff1 value: 2.2039398587625088 - type: nauc_map_at_1000_max value: -7.1185216957557005 - type: nauc_map_at_1000_std value: 13.743946451796427 - type: nauc_map_at_100_diff1 value: 2.352298138176251 - type: nauc_map_at_100_max value: -7.6894882871790236 - type: nauc_map_at_100_std value: 9.423702448674415 - type: nauc_map_at_10_diff1 value: -0.7996006244796997 - type: nauc_map_at_10_max value: -3.883617316378952 - type: nauc_map_at_10_std value: -9.010162215584359 - type: nauc_map_at_1_diff1 value: -14.540804131417854 - type: nauc_map_at_1_max value: -5.1573008851326065 - type: nauc_map_at_1_std value: -7.656042570304873 - type: nauc_map_at_20_diff1 value: 2.4262621068257135 - type: nauc_map_at_20_max value: -2.797719479935332 - type: nauc_map_at_20_std value: -6.492723413671281 - type: nauc_map_at_3_diff1 value: -12.76977998910709 - type: nauc_map_at_3_max value: -5.4884334672841995 - type: nauc_map_at_3_std value: -8.027896734726257 - type: nauc_map_at_5_diff1 value: -3.649569377955254 - type: nauc_map_at_5_max value: -11.595839414332854 - type: nauc_map_at_5_std value: -8.830419718907086 - type: nauc_mrr_at_1000_diff1 value: -0.6197732098443051 - type: nauc_mrr_at_1000_max value: -24.253402425330417 - type: nauc_mrr_at_1000_std value: 10.37771989076477 - type: nauc_mrr_at_100_diff1 value: -0.6376238919106149 - type: nauc_mrr_at_100_max value: -24.342213789915842 - type: nauc_mrr_at_100_std value: 10.51412622043574 - type: nauc_mrr_at_10_diff1 value: 0.16993164166959274 - type: nauc_mrr_at_10_max value: -25.6197772706814 - type: nauc_mrr_at_10_std value: 9.042003512118251 - type: nauc_mrr_at_1_diff1 value: -4.3668409896483125 - type: nauc_mrr_at_1_max value: -10.908434972927616 - type: nauc_mrr_at_1_std value: 1.8363187629423776 - type: nauc_mrr_at_20_diff1 value: -0.4751612608893524 - type: nauc_mrr_at_20_max value: -25.330726501014965 - type: nauc_mrr_at_20_std value: 11.110113131699444 - type: nauc_mrr_at_3_diff1 value: 1.1477476210783253 - type: nauc_mrr_at_3_max value: -18.50801052487565 - type: nauc_mrr_at_3_std value: 7.812899024458835 - type: nauc_mrr_at_5_diff1 value: 0.6521469927673968 - type: nauc_mrr_at_5_max value: -21.416210744034153 - type: nauc_mrr_at_5_std value: 11.504256088841963 - type: nauc_ndcg_at_1000_diff1 value: 3.4056842894504022 - type: nauc_ndcg_at_1000_max value: -20.75643760341645 - type: nauc_ndcg_at_1000_std value: 40.59382673351137 - type: nauc_ndcg_at_100_diff1 value: 2.6307496691299748 - type: nauc_ndcg_at_100_max value: -25.295593466700737 - type: nauc_ndcg_at_100_std value: 31.39618142617403 - type: nauc_ndcg_at_10_diff1 value: 5.002963427746491 - type: nauc_ndcg_at_10_max value: -21.675189721633814 - type: nauc_ndcg_at_10_std value: -1.469843197474695 - type: nauc_ndcg_at_1_diff1 value: -9.16604671836183 - type: nauc_ndcg_at_1_max value: -14.823569484107907 - type: nauc_ndcg_at_1_std value: 2.7125738258252983 - type: 
nauc_ndcg_at_20_diff1 value: 4.383118964859878 - type: nauc_ndcg_at_20_max value: -22.34000929195479 - type: nauc_ndcg_at_20_std value: 1.7471883582083971 - type: nauc_ndcg_at_3_diff1 value: -4.534899565092995 - type: nauc_ndcg_at_3_max value: -23.884347314346275 - type: nauc_ndcg_at_3_std value: 5.406857104296577 - type: nauc_ndcg_at_5_diff1 value: 5.404901375866572 - type: nauc_ndcg_at_5_max value: -25.33284917518358 - type: nauc_ndcg_at_5_std value: 3.3375579303514655 - type: nauc_precision_at_1000_diff1 value: -0.4809491820541609 - type: nauc_precision_at_1000_max value: 14.095928757973825 - type: nauc_precision_at_1000_std value: 40.99850413556261 - type: nauc_precision_at_100_diff1 value: 2.4829117900025968 - type: nauc_precision_at_100_max value: -19.358122626022247 - type: nauc_precision_at_100_std value: 69.16347293290907 - type: nauc_precision_at_10_diff1 value: 17.1708621551976 - type: nauc_precision_at_10_max value: -18.120442425032554 - type: nauc_precision_at_10_std value: 3.345789829578046 - type: nauc_precision_at_1_diff1 value: -4.3668409896483125 - type: nauc_precision_at_1_max value: -10.908434972927616 - type: nauc_precision_at_1_std value: 1.8363187629423776 - type: nauc_precision_at_20_diff1 value: 14.726770245618805 - type: nauc_precision_at_20_max value: -21.815069886538808 - type: nauc_precision_at_20_std value: 18.892811452623157 - type: nauc_precision_at_3_diff1 value: 0.9886305353262563 - type: nauc_precision_at_3_max value: -20.231039993615312 - type: nauc_precision_at_3_std value: 6.358708074233125 - type: nauc_precision_at_5_diff1 value: 15.348665084122503 - type: nauc_precision_at_5_max value: -23.255200414646325 - type: nauc_precision_at_5_std value: 5.968161709012153 - type: nauc_recall_at_1000_diff1 value: 4.060744763807052 - type: nauc_recall_at_1000_max value: -11.386644024112234 - type: nauc_recall_at_1000_std value: 77.38341256255143 - type: nauc_recall_at_100_diff1 value: 3.0602490159843883 - type: nauc_recall_at_100_max value: -23.133334130489928 - type: nauc_recall_at_100_std value: 42.72840616542226 - type: nauc_recall_at_10_diff1 value: 8.794442377965375 - type: nauc_recall_at_10_max value: -11.666849404762925 - type: nauc_recall_at_10_std value: -7.829821703241252 - type: nauc_recall_at_1_diff1 value: -14.540804131417854 - type: nauc_recall_at_1_max value: -5.1573008851326065 - type: nauc_recall_at_1_std value: -7.656042570304873 - type: nauc_recall_at_20_diff1 value: 9.614735799856538 - type: nauc_recall_at_20_max value: -14.765691684991822 - type: nauc_recall_at_20_std value: 0.612424582124656 - type: nauc_recall_at_3_diff1 value: -6.910547196179005 - type: nauc_recall_at_3_max value: -9.544659311113964 - type: nauc_recall_at_3_std value: -5.046214274501399 - type: nauc_recall_at_5_diff1 value: 6.384770222182381 - type: nauc_recall_at_5_max value: -19.233502105079932 - type: nauc_recall_at_5_std value: -5.566077739772719 - type: ndcg_at_1 value: 18.367 - type: ndcg_at_10 value: 18.801000000000002 - type: ndcg_at_100 value: 30.296 - type: ndcg_at_1000 value: 42.018 - type: ndcg_at_20 value: 20.14 - type: ndcg_at_3 value: 20.96 - type: ndcg_at_5 value: 19.683999999999997 - type: precision_at_1 value: 20.408 - type: precision_at_10 value: 16.531000000000002 - type: precision_at_100 value: 6.49 - type: precision_at_1000 value: 1.443 - type: precision_at_20 value: 13.776 - type: precision_at_3 value: 22.448999999999998 - type: precision_at_5 value: 19.592000000000002 - type: recall_at_1 value: 1.522 - type: recall_at_10 value: 12.255 - type: 
recall_at_100 value: 41.64 - type: recall_at_1000 value: 77.46499999999999 - type: recall_at_20 value: 19.136 - type: recall_at_3 value: 5.113 - type: recall_at_5 value: 7.348000000000001 task: type: Retrieval - dataset: config: default name: MTEB TwitterSemEval2015 (default) revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 split: test type: mteb/twittersemeval2015-pairclassification metrics: - type: cosine_accuracy value: 86.35036061274364 - type: cosine_accuracy_threshold value: 78.45808267593384 - type: cosine_ap value: 74.3562958182802 - type: cosine_f1 value: 69.23173016864459 - type: cosine_f1_threshold value: 75.69116353988647 - type: cosine_precision value: 65.74139976275207 - type: cosine_recall value: 73.11345646437995 - type: dot_accuracy value: 86.35036061274364 - type: dot_accuracy_threshold value: 78.45808267593384 - type: dot_ap value: 74.35629143935617 - type: dot_f1 value: 69.23173016864459 - type: dot_f1_threshold value: 75.69116353988647 - type: dot_precision value: 65.74139976275207 - type: dot_recall value: 73.11345646437995 - type: euclidean_accuracy value: 86.35036061274364 - type: euclidean_accuracy_threshold value: 65.63827991485596 - type: euclidean_ap value: 74.35629336635377 - type: euclidean_f1 value: 69.23173016864459 - type: euclidean_f1_threshold value: 69.72637176513672 - type: euclidean_precision value: 65.74139976275207 - type: euclidean_recall value: 73.11345646437995 - type: main_score value: 74.3562958182802 - type: manhattan_accuracy value: 86.30267628300649 - type: manhattan_accuracy_threshold value: 1388.2848739624023 - type: manhattan_ap value: 74.34214111115519 - type: manhattan_f1 value: 69.18905284399041 - type: manhattan_f1_threshold value: 1506.6452026367188 - type: manhattan_precision value: 66.27204638801642 - type: manhattan_recall value: 72.37467018469657 - type: max_accuracy value: 86.35036061274364 - type: max_ap value: 74.3562958182802 - type: max_f1 value: 69.23173016864459 - type: max_precision value: 66.27204638801642 - type: max_recall value: 73.11345646437995 - type: similarity_accuracy value: 86.35036061274364 - type: similarity_accuracy_threshold value: 78.4580945968628 - type: similarity_ap value: 74.3562875678525 - type: similarity_f1 value: 69.23173016864459 - type: similarity_f1_threshold value: 75.69118142127991 - type: similarity_precision value: 65.74139976275207 - type: similarity_recall value: 73.11345646437995 task: type: PairClassification - dataset: config: default name: MTEB TwitterURLCorpus (default) revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf split: test type: mteb/twitterurlcorpus-pairclassification metrics: - type: cosine_accuracy value: 88.15539255637054 - type: cosine_accuracy_threshold value: 78.5060167312622 - type: cosine_ap value: 84.15130253353578 - type: cosine_f1 value: 76.15275390479039 - type: cosine_f1_threshold value: 75.17023086547852 - type: cosine_precision value: 73.97996224771308 - type: cosine_recall value: 78.45703726516786 - type: dot_accuracy value: 88.15539255637054 - type: dot_accuracy_threshold value: 78.50601077079773 - type: dot_ap value: 84.15130163532636 - type: dot_f1 value: 76.15275390479039 - type: dot_f1_threshold value: 75.17023086547852 - type: dot_precision value: 73.97996224771308 - type: dot_recall value: 78.45703726516786 - type: euclidean_accuracy value: 88.15539255637054 - type: euclidean_accuracy_threshold value: 65.56521654129028 - type: euclidean_ap value: 84.1513046482876 - type: euclidean_f1 value: 76.15275390479039 - type: euclidean_f1_threshold value: 
70.46952247619629 - type: euclidean_precision value: 73.97996224771308 - type: euclidean_recall value: 78.45703726516786 - type: main_score value: 84.1513046482876 - type: manhattan_accuracy value: 88.13404742500097 - type: manhattan_accuracy_threshold value: 1414.7026062011719 - type: manhattan_ap value: 84.12376985849109 - type: manhattan_f1 value: 76.16659806159488 - type: manhattan_f1_threshold value: 1529.8437118530273 - type: manhattan_precision value: 74.09537677466326 - type: manhattan_recall value: 78.35694487218971 - type: max_accuracy value: 88.15539255637054 - type: max_ap value: 84.1513046482876 - type: max_f1 value: 76.16659806159488 - type: max_precision value: 74.09537677466326 - type: max_recall value: 78.45703726516786 - type: similarity_accuracy value: 88.15539255637054 - type: similarity_accuracy_threshold value: 78.5060167312622 - type: similarity_ap value: 84.15130432409686 - type: similarity_f1 value: 76.15275390479039 - type: similarity_f1_threshold value: 75.17023086547852 - type: similarity_precision value: 73.97996224771308 - type: similarity_recall value: 78.45703726516786 task: type: PairClassification
---

# FUE

This is a sentence transformer model based on all-mpnet-base-v2.

## Usage

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is encoded"]

model = SentenceTransformer('FUE')
embeddings = model.encode(sentences)
print(embeddings)
```
riyadifirman/classbird_1
riyadifirman
2024-10-07T12:48:26Z
197
0
transformers
[ "transformers", "safetensors", "resnet", "image-classification", "generated_from_trainer", "base_model:RobertZ2011/resnet-18-birb", "base_model:finetune:RobertZ2011/resnet-18-birb", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-07T12:34:02Z
--- library_name: transformers base_model: RobertZ2011/resnet-18-birb tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: classbird_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classbird_1 This model is a fine-tuned version of [RobertZ2011/resnet-18-birb](https://huggingface.co/RobertZ2011/resnet-18-birb) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0949 - Accuracy: 0.7726 - Precision: 0.7789 - Recall: 0.7726 - F1: 0.7680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 2.5479 | 1.0 | 29 | 2.2789 | 0.2946 | 0.3112 | 0.2946 | 0.2571 | | 1.9716 | 2.0 | 58 | 1.9112 | 0.4651 | 0.5401 | 0.4651 | 0.4270 | | 1.506 | 3.0 | 87 | 1.6503 | 0.6202 | 0.6831 | 0.6202 | 0.5933 | | 1.1838 | 4.0 | 116 | 1.4594 | 0.6667 | 0.6855 | 0.6667 | 0.6439 | | 0.9704 | 5.0 | 145 | 1.3127 | 0.7183 | 0.7395 | 0.7183 | 0.7064 | | 0.7997 | 6.0 | 174 | 1.2345 | 0.7468 | 0.7586 | 0.7468 | 0.7410 | | 0.763 | 7.0 | 203 | 1.1520 | 0.7442 | 0.7493 | 0.7442 | 0.7332 | | 0.6448 | 8.0 | 232 | 1.1172 | 0.7597 | 0.7745 | 0.7597 | 0.7531 | | 0.5839 | 9.0 | 261 | 1.0984 | 0.7649 | 0.7753 | 0.7649 | 0.7621 | | 0.5993 | 10.0 | 290 | 1.0949 | 0.7726 | 0.7789 | 0.7726 | 0.7680 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
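## Example usage (sketch)

The card above does not include an inference snippet. A minimal sketch (not from the model authors) of loading the fine-tuned classifier with the `transformers` pipeline follows, assuming the repo id `riyadifirman/classbird_1` from this row and that the image processor config was pushed with the weights; the image path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned ResNet-18 bird classifier from the Hub (repo id taken from this row).
classifier = pipeline("image-classification", model="riyadifirman/classbird_1")

# "bird.jpg" is a placeholder path to a local image file.
predictions = classifier("bird.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```

The pipeline returns a list of `{label, score}` dictionaries sorted by score.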
rg1683/base_bart_conversation
rg1683
2024-10-07T12:38:28Z
106
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-07T12:37:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
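## Example usage (sketch)

The usage section above is left as a template. As a hedged placeholder based only on this row's `bart` / `text2text-generation` tags, the checkpoint could presumably be loaded as follows; the repo id `rg1683/base_bart_conversation` comes from this row and the prompt is illustrative, since the intended input format is not documented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "rg1683/base_bart_conversation"  # repo id taken from this row
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Illustrative conversational prompt; the expected input format is not documented in the card.
inputs = tokenizer("Hello, how has your day been?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```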
SomeUser675/mentalhealthconvobot2
SomeUser675
2024-10-07T12:19:58Z
7
0
null
[ "safetensors", "blenderbot", "base_model:facebook/blenderbot-400M-distill", "base_model:finetune:facebook/blenderbot-400M-distill", "region:us" ]
null
2024-10-07T12:11:44Z
---
base_model:
- facebook/blenderbot-400M-distill
---
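## Example usage (sketch)

The card carries no usage notes. Below is a minimal sketch, assuming the checkpoint keeps the architecture and tokenizer of the `facebook/blenderbot-400M-distill` base listed above; the repo id comes from this row and the example message is illustrative.

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

repo_id = "SomeUser675/mentalhealthconvobot2"  # repo id taken from this row
tokenizer = BlenderbotTokenizer.from_pretrained(repo_id)
model = BlenderbotForConditionalGeneration.from_pretrained(repo_id)

# Illustrative single-turn exchange.
inputs = tokenizer("I have been feeling quite anxious this week.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```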
Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF
Alex01837178373
2024-10-07T12:17:42Z
9
1
mergekit
[ "mergekit", "gguf", "merge", "llama-cpp", "gguf-my-repo", "ru", "dataset:40umov/dostoevsky", "base_model:IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp", "base_model:quantized:IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T12:16:45Z
--- base_model: IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp datasets: - 40umov/dostoevsky language: - ru library_name: mergekit tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF This model was converted to GGUF format from [`IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp`](https://huggingface.co/IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF --hf-file vikhr_nemo_orpo_dostoevsky_12b_slerp-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF --hf-file vikhr_nemo_orpo_dostoevsky_12b_slerp-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF --hf-file vikhr_nemo_orpo_dostoevsky_12b_slerp-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Alex01837178373/vikhr_nemo_orpo_dostoevsky_12b_slerp-Q8_0-GGUF --hf-file vikhr_nemo_orpo_dostoevsky_12b_slerp-q8_0.gguf -c 2048 ```
mariana-coelho-9/whisper-tiny-pt
mariana-coelho-9
2024-10-07T12:14:12Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-07T10:40:33Z
--- library_name: transformers language: - pt license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Tiny pt results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: pt split: test args: 'config: pt, split: train/test' metrics: - name: Wer type: wer value: 34.98131015002622 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny pt This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5418 - Wer: 34.9813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.4834 | 0.8780 | 1000 | 0.5418 | 34.9813 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
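## Example usage (sketch)

For reference, a minimal transcription sketch with the `transformers` pipeline, assuming the repo id `mariana-coelho-9/whisper-tiny-pt` from this row; the audio path is a placeholder and decoding a local file requires `ffmpeg`.

```python
from transformers import pipeline

# Load the fine-tuned Portuguese Whisper checkpoint (repo id taken from this row).
asr = pipeline("automatic-speech-recognition", model="mariana-coelho-9/whisper-tiny-pt")

# "sample.wav" is a placeholder path to a local audio file.
result = asr("sample.wav")
print(result["text"])
```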
streit63/donut-base-tabs
streit63
2024-10-07T12:12:39Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:streit63/donut-base-tabs", "base_model:finetune:streit63/donut-base-tabs", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-05T09:43:24Z
--- library_name: transformers base_model: streit63/donut-base-tabs tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-tabs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-tabs This model is a fine-tuned version of [streit63/donut-base-tabs](https://huggingface.co/streit63/donut-base-tabs) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu118 - Datasets 3.0.1 - Tokenizers 0.20.0
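## Example usage (sketch)

The card omits usage. Below is a hedged sketch of typical Donut-style inference with `DonutProcessor` and `VisionEncoderDecoderModel`; the task prompt and image path are placeholders, since the prompt format this checkpoint was fine-tuned with is not documented here.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "streit63/donut-base-tabs"  # repo id taken from this row
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("document.png").convert("RGB")  # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task prompt; the real start token depends on how the checkpoint was fine-tuned.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```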
bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF
bunnycore
2024-10-07T12:10:03Z
6
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:bunnycore/Qwen-2.5-3B-Remix", "base_model:quantized:bunnycore/Qwen-2.5-3B-Remix", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-07T12:09:51Z
--- base_model: bunnycore/Qwen-2.5-3B-Remix library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF This model was converted to GGUF format from [`bunnycore/Qwen-2.5-3B-Remix`](https://huggingface.co/bunnycore/Qwen-2.5-3B-Remix) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bunnycore/Qwen-2.5-3B-Remix) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF --hf-file qwen-2.5-3b-remix-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF --hf-file qwen-2.5-3b-remix-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF --hf-file qwen-2.5-3b-remix-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bunnycore/Qwen-2.5-3B-Remix-IQ4_NL-GGUF --hf-file qwen-2.5-3b-remix-iq4_nl-imat.gguf -c 2048 ```
RichardErkhov/ZySec-AI_-_SecurityLLM-gguf
RichardErkhov
2024-10-07T12:02:51Z
226
2
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T08:02:38Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SecurityLLM - GGUF - Model creator: https://huggingface.co/ZySec-AI/ - Original model: https://huggingface.co/ZySec-AI/SecurityLLM/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SecurityLLM.Q2_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q2_K.gguf) | Q2_K | 2.53GB | | [SecurityLLM.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [SecurityLLM.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.IQ3_S.gguf) | IQ3_S | 2.96GB | | [SecurityLLM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [SecurityLLM.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.IQ3_M.gguf) | IQ3_M | 3.06GB | | [SecurityLLM.Q3_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q3_K.gguf) | Q3_K | 3.28GB | | [SecurityLLM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [SecurityLLM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [SecurityLLM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [SecurityLLM.Q4_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q4_0.gguf) | Q4_0 | 3.83GB | | [SecurityLLM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [SecurityLLM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [SecurityLLM.Q4_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q4_K.gguf) | Q4_K | 4.07GB | | [SecurityLLM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [SecurityLLM.Q4_1.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q4_1.gguf) | Q4_1 | 4.24GB | | [SecurityLLM.Q5_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q5_0.gguf) | Q5_0 | 4.65GB | | [SecurityLLM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [SecurityLLM.Q5_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q5_K.gguf) | Q5_K | 4.78GB | | [SecurityLLM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [SecurityLLM.Q5_1.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q5_1.gguf) | Q5_1 | 5.07GB | | [SecurityLLM.Q6_K.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q6_K.gguf) | Q6_K | 5.53GB | | [SecurityLLM.Q8_0.gguf](https://huggingface.co/RichardErkhov/ZySec-AI_-_SecurityLLM-gguf/blob/main/SecurityLLM.Q8_0.gguf) | 
Q8_0 | 7.17GB | Original model description: --- library_name: transformers license: apache-2.0 tags: - security - cybersecwithai - threat - vulnerability - infosec - zysec.ai - cyber security - ai4security - llmsecurity - cyber - malware analysis - exploitdev - ai4good - aisecurity - threat - cybersec - cybersecurity --- # ZySec-7B ZySec-7B, stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand, expert guidance in cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges. The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as: - Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain. - Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001. - Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling. - Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review. ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise. As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than a mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands. # For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap) <img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/> Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true) Fully compatible with [LM Studio](https://lmstudio.ai). Search for “Zysec” and here is what you get. Here is a sample output of ZySec writing email to John about database security using LM Studio: <img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/> --- The training is funded by [ZySec AI](https://www.zysec.app), the mobile app for Cyber Security professionals. Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF) ## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec) Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. 
ZySec AI isn't just a tool, it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity, while ensuring privacy and security.

### Discover the Key Features of Project ZySec

- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature.)

### About Project ZySec by ZySec AI

ZySec AI is an open-source project with a vision of fusing Cybersecurity with Artificial Intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI symbolizes a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges, always ensuring privacy and security.

https://github.com/ZySec-AI/ZySec

### The ZySec Roadmap

https://github.com/ZySec-AI/.github/blob/main/roadmap.md
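### Running a quantized file (sketch)

As a hedged illustration of using one of the GGUF files listed in the table above, the sketch below downloads a quant with `huggingface_hub` and runs it with the `llama-cpp-python` package; the choice of quant, context size, and prompt are assumptions, not recommendations from the quantizer.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # provided by the llama-cpp-python package

# Download one of the quantized files from the table above (Q4_K_M chosen arbitrarily).
model_path = hf_hub_download(
    repo_id="RichardErkhov/ZySec-AI_-_SecurityLLM-gguf",
    filename="SecurityLLM.Q4_K_M.gguf",
)

# Context size is an assumption, not a value documented by the quantizer.
llm = Llama(model_path=model_path, n_ctx=4096)

# Illustrative security-themed prompt.
output = llm("Briefly explain the main phases of the cyber kill chain.", max_tokens=256)
print(output["choices"][0]["text"])
```

Any other file from the table can be substituted for the `filename` argument.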
nermineatif/models
nermineatif
2024-10-07T11:58:43Z
94
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T08:26:42Z
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** nermineatif
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
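## Example usage (sketch)

A minimal generation sketch, assuming the repo id `nermineatif/models` from this row, that the pushed weights load with the standard `transformers` causal-LM classes, that the tokenizer ships a chat template, and that `accelerate` is installed for `device_map="auto"`; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nermineatif/models"  # repo id taken from this row
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative chat-style prompt using the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "Give me three tips for writing clear commit messages."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```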
choozmo/whisper-small-nan-tw
choozmo
2024-10-07T11:50:43Z
84
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-07T06:31:35Z
--- library_name: transformers language: - zh license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: Whisper Taiwanese Small3 - Steven Wang results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 config: nan-tw split: None args: 'config: zh, split: test' metrics: - name: Wer type: wer value: 90.87361803598526 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Taiwanese Small3 - Steven Wang This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7512 - Wer: 90.8736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.083 | 2.9240 | 1000 | 0.6382 | 93.2582 | | 0.0111 | 5.8480 | 2000 | 0.6895 | 91.6323 | | 0.0019 | 8.7719 | 3000 | 0.7383 | 91.1771 | | 0.0009 | 11.6959 | 4000 | 0.7512 | 90.8736 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf
RichardErkhov
2024-10-07T11:49:59Z
57
0
null
[ "gguf", "arxiv:2204.05149", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T07:28:49Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Meta-Llama-3.1-8B-Instruct - GGUF - Model creator: https://huggingface.co/tanliboy/ - Original model: https://huggingface.co/tanliboy/Meta-Llama-3.1-8B-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Meta-Llama-3.1-8B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q2_K.gguf) | Q2_K | 2.96GB | | [Meta-Llama-3.1-8B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Meta-Llama-3.1-8B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Meta-Llama-3.1-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Meta-Llama-3.1-8B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Meta-Llama-3.1-8B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q3_K.gguf) | Q3_K | 3.74GB | | [Meta-Llama-3.1-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Meta-Llama-3.1-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Meta-Llama-3.1-8B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Meta-Llama-3.1-8B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q4_0.gguf) | Q4_0 | 4.34GB | | [Meta-Llama-3.1-8B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Meta-Llama-3.1-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Meta-Llama-3.1-8B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q4_K.gguf) | Q4_K | 4.58GB | | [Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Meta-Llama-3.1-8B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q4_1.gguf) | Q4_1 | 4.78GB | | [Meta-Llama-3.1-8B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Meta-Llama-3.1-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Meta-Llama-3.1-8B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q5_K.gguf) | Q5_K | 5.34GB | | [Meta-Llama-3.1-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Meta-Llama-3.1-8B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q5_1.gguf) | Q5_1 | 5.65GB | | [Meta-Llama-3.1-8B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q6_K.gguf) | Q6_K | 6.14GB | | [Meta-Llama-3.1-8B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/tanliboy_-_Meta-Llama-3.1-8B-Instruct-gguf/blob/main/Meta-Llama-3.1-8B-Instruct.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en - de - fr - it - pt - hi - es - th license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. 
If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). 
Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. 
Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. 
Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Input modalities</strong> </td> <td><strong>Output modalities</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="3" >Llama 3.1 (text only) </td> <td rowspan="3" >A new mix of publicly available online data. </td> <td>8B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> <td rowspan="3" >15T+ </td> <td rowspan="3" >December 2023 </td> </tr> <tr> <td>70B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> <tr> <td>405B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> </table> **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. **Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** July 23, 2024. **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.

**<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## How to use

This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantisation and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)

### Tool use with transformers

LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).

Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
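The tool-use snippets that follow assume a tokenizer for this model has already been loaded; a minimal setup might look like this (loading the matching model for generation is shown in the pipeline example above):

```python
from transformers import AutoTokenizer

# Load the tokenizer that ships with the instruct model; its chat template
# accepts the `tools` argument used in the example below.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
```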
Here is a quick example showing a single simple tool: ```python # First, define a tool def get_current_temperature(location: str) -> float: """ Get the current temperature at a location. Args: location: The location to get the temperature for, in the format "City, Country" Returns: The current temperature at the specified location in the specified units, as a float. """ return 22. # A real function should probably actually get the temperature! # Next, create a chat and apply the chat template messages = [ {"role": "system", "content": "You are a bot that responds to weather queries."}, {"role": "user", "content": "Hey, what's the temperature in Paris right now?"} ] inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True) ``` You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so: ```python tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}} messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]}) ``` and then call the tool and append the result, with the `tool` role, like so: ```python messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"}) ``` After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling). ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct ``` ## Hardware and Software **Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. 
<table> <tr> <td> </td> <td><strong>Training Time (GPU hours)</strong> </td> <td><strong>Training Power Consumption (W)</strong> </td> <td><strong>Training Location-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> <td><strong>Training Market-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> </tr> <tr> <td>Llama 3.1 8B </td> <td>1.46M </td> <td>700 </td> <td>420 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 70B </td> <td>7.0M </td> <td>700 </td> <td>2,040 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 405B </td> <td>30.84M </td> <td>700 </td> <td>8,930 </td> <td>0 </td> </tr> <tr> <td>Total </td> <td>39.3M <td> <ul> </ul> </td> <td>11,390 </td> <td>0 </td> </tr> </table> The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023. ## Benchmark scores In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> 
<td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>41.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>60.4 </td> <td>72.6 </td> <td>81.7 </td> <td>80.5 </td> <td>89.0 </td> </tr> <tr> <td>MBPP ++ base version </td> <td>0 </td> <td>pass@1 </td> <td>70.6 </td> <td>72.8 </td> <td>82.5 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>Multipl-E HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>50.8 </td> <td>- </td> <td>65.5 </td> <td>75.2 </td> </tr> <tr> <td>Multipl-E MBPP </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>52.4 </td> <td>- </td> <td>62.0 </td> <td>65.7 </td> </tr> <tr> <td rowspan="2" >Math </td> <td>GSM-8K (CoT) </td> <td>8 </td> <td>em_maj1@1 </td> <td>80.6 </td> <td>84.5 </td> <td>93.0 </td> <td>95.1 </td> <td>96.8 </td> </tr> <tr> <td>MATH (CoT) </td> <td>0 </td> <td>final_em </td> <td>29.1 </td> <td>51.9 </td> <td>51.0 </td> <td>68.0 </td> <td>73.8 </td> </tr> <tr> <td rowspan="4" >Tool Use </td> <td>API-Bank </td> <td>0 </td> <td>acc </td> <td>48.3 </td> <td>82.6 </td> <td>85.1 </td> <td>90.0 </td> <td>92.0 </td> </tr> <tr> <td>BFCL </td> <td>0 </td> <td>acc </td> <td>60.3 </td> <td>76.1 </td> <td>83.0 </td> <td>84.8 </td> <td>88.5 </td> </tr> <tr> <td>Gorilla Benchmark API Bench </td> <td>0 </td> <td>acc </td> <td>1.7 </td> <td>8.2 </td> <td>14.7 </td> <td>29.7 </td> <td>35.3 </td> </tr> <tr> <td>Nexus (0-shot) </td> <td>0 </td> <td>macro_avg/acc </td> <td>18.1 </td> <td>38.5 </td> <td>47.8 </td> <td>56.7 </td> <td>58.7 </td> </tr> <tr> <td>Multilingual </td> <td>Multilingual MGSM (CoT) </td> <td>0 </td> <td>em </td> <td>- </td> <td>68.9 </td> <td>- </td> <td>86.9 </td> <td>91.6 </td> </tr> </table> #### Multilingual benchmarks <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Language</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="9" ><strong>General</strong> </td> <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong> </td> <td>Portuguese </td> <td>62.12 </td> <td>80.13 </td> <td>84.95 </td> </tr> <tr> 
<td>Spanish </td> <td>62.45 </td> <td>80.05 </td> <td>85.08 </td> </tr> <tr> <td>Italian </td> <td>61.63 </td> <td>80.4 </td> <td>85.04 </td> </tr> <tr> <td>German </td> <td>60.59 </td> <td>79.27 </td> <td>84.36 </td> </tr> <tr> <td>French </td> <td>62.34 </td> <td>79.82 </td> <td>84.66 </td> </tr> <tr> <td>Hindi </td> <td>50.88 </td> <td>74.52 </td> <td>80.31 </td> </tr> <tr> <td>Thai </td> <td>50.32 </td> <td>72.95 </td> <td>78.21 </td> </tr> </table> ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. ### Responsible deployment Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.1 instruct Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper. **Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.1 systems **Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. 
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so that they are aware of the safety and security limitations of this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.

**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks covering long context, multilingual use, tool calls, coding, and memorization.

**Red teaming**

For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.

We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.
### Critical and other risks We specifically focused our efforts on mitigating the following critical risk areas: **1- CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. **2. Child Safety** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3. Cyber attack enablement** Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
interview-eval/zephyr-7b-math-case-7
interview-eval
2024-10-07T11:43:04Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:EunsuKim/instruct", "dataset:EunsuKim/MATH", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T10:54:55Z
--- library_name: transformers license: apache-2.0 base_model: alignment-handbook/zephyr-7b-sft-full tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - EunsuKim/instruct - EunsuKim/MATH model-index: - name: zephyr-7b-math-case-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-math-case-7 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the EunsuKim/instruct and the EunsuKim/MATH datasets. It achieves the following results on the evaluation set: - Loss: 0.0013 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8719 | 1.0 | 18 | 0.5943 | | 0.4801 | 2.0 | 36 | 0.2952 | | 0.2452 | 3.0 | 54 | 0.1275 | | 0.098 | 4.0 | 72 | 0.0448 | | 0.0439 | 5.0 | 90 | 0.0259 | | 0.0306 | 6.0 | 108 | 0.0149 | | 0.017 | 7.0 | 126 | 0.0068 | | 0.006 | 8.0 | 144 | 0.0025 | | 0.0025 | 9.0 | 162 | 0.0014 | | 0.0012 | 10.0 | 180 | 0.0013 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
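For reference, the hyperparameters listed above correspond roughly to a `transformers` `TrainingArguments` configuration like the one below. This is an illustrative sketch only (the actual run used the alignment-handbook / TRL SFT setup on 4 GPUs, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Rough equivalent of the listed hyperparameters; total batch size 32
# comes from 4 devices x per-device batch size 8.
training_args = TrainingArguments(
    output_dir="zephyr-7b-math-case-7",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
)
```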
wanasash/whisper-large-v3-ec
wanasash
2024-10-07T11:32:14Z
6
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "cy", "dataset:wanasash/enwaucymraeg", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "region:us" ]
null
2024-10-06T07:50:59Z
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer datasets: - wanasash/enwaucymraeg metrics: - wer model-index: - name: whisper-large-v3-ec results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: wanasash/enwaucymraeg default type: wanasash/enwaucymraeg args: default metrics: - name: Wer type: wer value: 0.21372622155911974 language: - cy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-ec This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the wanasash/enwaucymraeg default dataset. It achieves the following results on the evaluation set: - Loss: 0.4733 - Wer: 0.2137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 0.0091 | 13.6054 | 1000 | 0.4027 | 0.2189 | | 0.0026 | 27.2109 | 2000 | 0.4282 | 0.2260 | | 0.0002 | 40.8163 | 3000 | 0.4444 | 0.2178 | | 0.0001 | 54.4218 | 4000 | 0.4667 | 0.2160 | | 0.0001 | 68.0272 | 5000 | 0.4733 | 0.2137 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
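A minimal transcription sketch with the `transformers` ASR pipeline (illustrative; `audio.wav` is a placeholder for your own Welsh-language recording):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Welsh speech recognition.
asr = pipeline("automatic-speech-recognition", model="wanasash/whisper-large-v3-ec")

# Transcribe a local audio file (path is a placeholder).
print(asr("audio.wav")["text"])
```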
Swekerr/ReasoningGemma2-2B-GGUF
Swekerr
2024-10-07T11:26:37Z
61
0
transformers
[ "transformers", "gguf", "gemma2", "text-generation-inference", "unsloth", "en", "dataset:KingNish/reasoning-base-20k", "base_model:unsloth/gemma-2-2b-bnb-4bit", "base_model:quantized:unsloth/gemma-2-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-07T11:18:49Z
--- base_model: unsloth/gemma-2-2b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma2 - gguf datasets: - KingNish/reasoning-base-20k --- # Uploaded model - **Developed by:** Swekerr - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
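Since this repository ships GGUF files, one way to run it locally is with `llama-cpp-python`; a minimal sketch is below. The quant filename is a placeholder assumption — substitute an actual GGUF filename from this repo:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF quant from the repo (filename is a placeholder).
path = hf_hub_download(
    repo_id="Swekerr/ReasoningGemma2-2B-GGUF",
    filename="ReasoningGemma2-2B.Q4_K_M.gguf",
)

# Load the model and run a simple completion.
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain step by step: what is 17 * 24?", max_tokens=256)
print(out["choices"][0]["text"])
```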
QuantFactory/TwinLlama-3.1-8B-DPO-GGUF
QuantFactory
2024-10-07T11:23:13Z
55
3
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "dpo", "en", "base_model:mlabonne/TwinLlama-3.1-8B", "base_model:quantized:mlabonne/TwinLlama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-01T05:44:17Z
--- base_model: mlabonne/TwinLlama-3.1-8B language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - dpo --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/TwinLlama-3.1-8B-DPO-GGUF This is quantized version of [mlabonne/TwinLlama-3.1-8B-DPO](https://huggingface.co/mlabonne/TwinLlama-3.1-8B-DPO) created using llama.cpp # Original Model Card # Uploaded model - **Developed by:** mlabonne - **License:** apache-2.0 - **Finetuned from model :** mlabonne/TwinLlama-3.1-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sripradha/marian-finetuned-kde4-en-to-fr
Sripradha
2024-10-07T11:20:39Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-10-07T06:00:41Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.90371209759549 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8554 - Bleu: 52.9037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
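A minimal usage sketch with the `transformers` translation pipeline (illustrative; the input sentence is arbitrary):

```python
from transformers import pipeline

# Load the fine-tuned English-to-French checkpoint.
translator = pipeline("translation", model="Sripradha/marian-finetuned-kde4-en-to-fr")

print(translator("Default to expanded threads")[0]["translation_text"])
```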
bartowski/Human-Like-Mistral-Nemo-Instruct-2407-GGUF
bartowski
2024-10-07T11:17:12Z
361
1
null
[ "gguf", "axolotl", "dpo", "trl", "generated_from_trainer", "text-generation", "base_model:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407", "base_model:quantized:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-07T10:43:13Z
--- base_model: HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 license: apache-2.0 pipeline_tag: text-generation tags: - axolotl - dpo - trl - generated_from_trainer quantized_by: bartowski model-index: - name: Humanish-Mistral-Nemo-Instruct-2407 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 54.51 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 32.71 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 7.63 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 5.03 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 9.4 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 28.01 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 name: Open LLM Leaderboard --- ## Llamacpp imatrix Quantizations of Humanish-Mistral-Nemo-Instruct-2407 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3878">b3878</a> for quantization. Original model: https://huggingface.co/HumanLLMs/Humanish-Mistral-Nemo-Instruct-2407 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format No prompt format found, check original model page ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Humanish-Mistral-Nemo-Instruct-2407-f16.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-f16.gguf) | f16 | 24.50GB | false | Full F16 weights. 
| | [Humanish-Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. | | [Humanish-Mistral-Nemo-Instruct-2407-Q6_K_L.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q5_K_L.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_K_L.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for must use cases, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q3_K_XL.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 | 7.09GB | false | Legacy format, generally not worth using over similarly sized formats | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_0_8_8.gguf) | Q4_0_8_8 | 7.07GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). 
| | [Humanish-Mistral-Nemo-Instruct-2407-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_0_4_8.gguf) | Q4_0_4_8 | 7.07GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). | | [Humanish-Mistral-Nemo-Instruct-2407-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q4_0_4_4.gguf) | Q4_0_4_4 | 7.07GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. | | [Humanish-Mistral-Nemo-Instruct-2407-IQ4_XS.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Humanish-Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. | | [Humanish-Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. | | [Humanish-Mistral-Nemo-Instruct-2407-IQ3_M.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Humanish-Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. | | [Humanish-Mistral-Nemo-Instruct-2407-Q2_K_L.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Humanish-Mistral-Nemo-Instruct-2407-IQ3_XS.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Humanish-Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. | | [Humanish-Mistral-Nemo-Instruct-2407-IQ2_M.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [Humanish-Mistral-Nemo-Instruct-2407-IQ2_S.gguf](https://huggingface.co/bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF/blob/main/Humanish-Mistral-Nemo-Instruct-2407-IQ2_S.gguf) | IQ2_S | 4.14GB | false | Low quality, uses SOTA techniques to be usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. 
Some say that this improves the quality; others don't notice any difference. If you use these models, PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful, so I don't keep uploading quants no one is using.

Thanks!

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF --include "Humanish-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF --include "Humanish-Mistral-Nemo-Instruct-2407-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (Humanish-Mistral-Nemo-Instruct-2407-Q8_0) or download them all in place (./).

## Q4_0_X_X

These are *NOT* for Metal (Apple) offloading, only ARM chips.

If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660).

To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).

## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide on.

The I-quants are *not* compatible with Vulkan (which also supports AMD), so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
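If you prefer Python over the CLI commands shown above, the same file can be fetched with `huggingface_hub` (a minimal sketch using the Q4_K_M file from the table):

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the current directory.
path = hf_hub_download(
    repo_id="bartowski/Humanish-Mistral-Nemo-Instruct-2407-GGUF",
    filename="Humanish-Mistral-Nemo-Instruct-2407-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```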
ishaq101/llama3-8b-finetune-4bit-lora
ishaq101
2024-10-07T11:00:24Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:ishaq101/llama3-8b-finetune-4bit", "base_model:quantized:ishaq101/llama3-8b-finetune-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-07T10:58:53Z
--- base_model: ishaq101/llama3-8b-finetune-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ishaq101 - **License:** apache-2.0 - **Finetuned from model :** ishaq101/llama3-8b-finetune-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
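A minimal loading sketch (assuming `bitsandbytes` and a CUDA GPU are available, since the checkpoint is stored as 4-bit weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ishaq101/llama3-8b-finetune-4bit-lora"

# The repo stores bitsandbytes 4-bit weights, so the quantization config
# is read from the checkpoint itself.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```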
rombodawg/Rombos-LLM-V2.5-Qwen-72b
rombodawg
2024-10-07T10:58:38Z
3,193
35
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-72B-Instruct", "base_model:finetune:Qwen/Qwen2.5-72B-Instruct", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-30T01:53:14Z
--- license: other library_name: transformers base_model: - Qwen/Qwen2.5-72B-Instruct license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE model-index: - name: Replete-LLM-V2.5-Qwen-72b_Duplicated results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 71.55 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 61.27 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 47.58 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 19.8 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 17.32 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 54.83 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated name: Open LLM Leaderboard --- # Rombos-LLM-V2.5-Qwen-72b ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/wp9qOi2K2WGzkey0I3SgH.jpeg) Rombos-LLM-V2.5-Qwen-72b is a continues finetuned version of Qwen2.5-72B. I noticed recently that the Qwen team did not learn from my methods of continuous finetuning, the great benefits, and no downsides of it. So I took it upon myself to merge the instruct model with the base model myself using the *Ties* merge method This version of the model shows higher performance than the original instruct and base models. Quants: (Coming soon) GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-72b-GGUF EXL2: Benchmarks: (Coming soon) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Replete-LLM-V2.5-Qwen-72b_Duplicated) | Metric |Value| |-------------------|----:| |Avg. 
|45.39| |IFEval (0-Shot) |71.55| |BBH (3-Shot) |61.27| |MATH Lvl 5 (4-Shot)|47.58| |GPQA (0-shot) |19.80| |MuSR (0-shot) |17.32| |MMLU-PRO (5-shot) |54.83|
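A minimal loading sketch, assuming the merged model keeps the standard Qwen2.5-Instruct chat template and runs across multiple GPUs; the repo id below is an assumption based on the card title.

```python
# Sketch (assumptions noted): loading the merged 72B model with transformers.
# A 72B model in bf16 needs well over 140 GB of memory, so device_map="auto"
# over several GPUs (or CPU offload) is assumed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Rombos-LLM-V2.5-Qwen-72b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain the TIES merge method in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```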
shahxeebhassan/bert_base_ai_content_detector
shahxeebhassan
2024-10-07T10:47:46Z
292
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "dataset:shahxeebhassan/human_vs_ai_sentences", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T09:45:55Z
--- license: mit metrics: - accuracy base_model: - google-bert/bert-base-uncased datasets: - shahxeebhassan/human_vs_ai_sentences pipeline_tag: text-classification library_name: transformers --- ## Model Description This model is a fine-tuned BERT model for AI content detection. ## Training Data The model was trained on a [dataset](https://huggingface.co/datasets/shahxeebhassan/human_vs_ai_sentences) of over 100,000 sentences, each labeled as either AI-generated or human-written. This approach allows the model to predict the nature of each individual sentence, which is particularly useful for highlighting AI-written content within larger texts. ## Evaluation Metrics The model achieved an accuracy of 90% on the validation & test set. ## Usage ```python import torch from transformers import BertTokenizer, BertForSequenceClassification tokenizer = BertTokenizer.from_pretrained("shahxeebhassan/bert_base_ai_content_detector") model = BertForSequenceClassification.from_pretrained("shahxeebhassan/bert_base_ai_content_detector") inputs = tokenizer("Distance learning will not benefit students because the students are not able to develop as good of a relationship with their teachers.", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probabilities = torch.softmax(logits, dim=1).cpu().numpy() predicted_label = probabilities.argmax(axis=1) print(f"Predicted label for the input text: {predicted_label[0]}") ```
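The card describes highlighting AI-written content inside longer texts by scoring each sentence; the following is a sketch of that use, assuming a naive sentence split (a proper sentence tokenizer would be better) and without assuming which label index means "AI-generated".

```python
# Sketch (assumption): applying the detector sentence-by-sentence to flag
# AI-written passages in a longer document. The split on "." is illustrative only.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model = BertForSequenceClassification.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")

text = "First sentence of a longer document. Second sentence of the document."
sentences = [s.strip() + "." for s in text.split(".") if s.strip()]

inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=1)

# Print the predicted class index next to each sentence; verify the label
# mapping (human vs. AI) against the model config before relying on it.
for sentence, p in zip(sentences, probs):
    print(p.argmax().item(), sentence)
```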
Kort/s3
Kort
2024-10-07T10:41:46Z
33
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T08:52:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhruvindia/GPT2-TVC-prompt
dhruvindia
2024-10-07T10:38:01Z
151
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T10:35:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kort/s2
Kort
2024-10-07T10:31:15Z
33
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-07T08:41:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xbhi112358/my-fine-tuned-model_4
Xbhi112358
2024-10-07T10:29:30Z
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-07T10:12:51Z
--- base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** Xbhi112358 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
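A hedged inference sketch, assuming the checkpoint is stored as 4-bit bitsandbytes weights (as the base model tag suggests) and that no specific prompt template was documented for the fine-tune.

```python
# Sketch (assumptions noted): loading the 4-bit checkpoint with transformers.
# Requires bitsandbytes and accelerate, and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xbhi112358/my-fine-tuned-model_4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt format used during SFT is not documented; plain text is assumed.
prompt = "Question: What does fine-tuning a language model mean?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```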
bartowski/Human-Like-LLama3-8B-Instruct-GGUF
bartowski
2024-10-07T10:29:28Z
1,287
2
null
[ "gguf", "axolotl", "dpo", "trl", "generated_from_trainer", "text-generation", "dataset:HumanLLMs/humanish-dpo-project", "base_model:HumanLLMs/Human-Like-LLama3-8B-Instruct", "base_model:quantized:HumanLLMs/Human-Like-LLama3-8B-Instruct", "license:llama3", "model-index", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-07T10:07:53Z
--- base_model: HumanLLMs/Humanish-LLama3-8B-Instruct datasets: - HumanLLMs/humanish-dpo-project license: llama3 pipeline_tag: text-generation tags: - axolotl - dpo - trl - generated_from_trainer quantized_by: bartowski model-index: - name: Humanish-LLama3.1-8B-Instruct results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 64.98 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 28.01 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 8.46 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 0.78 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 2.0 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.02 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=HumanLLMs/Humanish-LLama3.1-8B-Instruct name: Open LLM Leaderboard --- ## Llamacpp imatrix Quantizations of Humanish-LLama3-8B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3878">b3878</a> for quantization. Original model: https://huggingface.co/HumanLLMs/Humanish-LLama3-8B-Instruct All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format No prompt format found, check original model page ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Humanish-LLama3-8B-Instruct-f16.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-f16.gguf) | f16 | 16.07GB | false | Full F16 weights. 
| | [Humanish-LLama3-8B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. | | [Humanish-LLama3-8B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q6_K_L.gguf) | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q6_K.gguf) | Q6_K | 6.60GB | false | Very high quality, near perfect, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q5_K_L.gguf) | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.73GB | false | High quality, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.60GB | false | High quality, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_K_L.gguf) | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.92GB | false | Good quality, default size for most use cases, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Humanish-LLama3-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_0.gguf) | Q4_0 | 4.68GB | false | Legacy format, generally not worth using over similarly sized formats | | [Humanish-LLama3-8B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.66GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). | | [Humanish-LLama3-8B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.66GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). | | [Humanish-LLama3-8B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.66GB | false | Optimized for ARM inference. 
Should work well on all ARM chips, pick this if you're unsure. | | [Humanish-LLama3-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Humanish-LLama3-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. | | [Humanish-LLama3-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.02GB | false | Low quality. | | [Humanish-LLama3-8B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-IQ3_M.gguf) | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Humanish-LLama3-8B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q2_K_L.gguf) | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Humanish-LLama3-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.66GB | false | Low quality, not recommended. | | [Humanish-LLama3-8B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Humanish-LLama3-8B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-Q2_K.gguf) | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. | | [Humanish-LLama3-8B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF/blob/main/Humanish-LLama3-8B-Instruct-IQ2_M.gguf) | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Humanish-LLama3-8B-Instruct-GGUF --include "Humanish-LLama3-8B-Instruct-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Humanish-LLama3-8B-Instruct-GGUF --include "Humanish-LLama3-8B-Instruct-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (Humanish-LLama3-8B-Instruct-Q8_0) or download them all in place (./) ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. 
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset Thank you ZeroWw for the inspiration to experiment with embed/output Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
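For running a downloaded quant from Python, here is a minimal sketch using llama-cpp-python; the repo id and filename follow the download links above, and the `from_pretrained` helper is assumed to be available in a recent llama-cpp-python release.

```python
# Sketch (assumptions noted): pull one quant from the repo and run a chat completion.
# Install with: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Humanish-LLama3-8B-Instruct-GGUF",
    filename="Humanish-LLama3-8B-Instruct-Q4_K_M.gguf",
    n_ctx=8192,       # context length
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello like a human would."}]
)
print(out["choices"][0]["message"]["content"])
```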
jorgegoco/bert-fine-tuned-cola
jorgegoco
2024-10-07T10:04:32Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-07T09:24:22Z
--- base_model: bert-base-cased library_name: transformers license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: bert-fine-tuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8212 - Matthews Correlation: 0.5782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4452 | 1.0 | 1069 | 0.4682 | 0.5208 | | 0.3216 | 2.0 | 2138 | 0.6613 | 0.5620 | | 0.1795 | 3.0 | 3207 | 0.8212 | 0.5782 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
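Since the usage section above is empty, the following is a minimal sketch, assuming the checkpoint works with the standard `text-classification` pipeline; the meaning of LABEL_0 / LABEL_1 (unacceptable vs. acceptable) is not documented in the card and should be verified.

```python
# Sketch (assumption): CoLA-style acceptability classification with the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="jorgegoco/bert-fine-tuned-cola")
print(classifier("The book was written by the author."))   # expected: acceptable
print(classifier("The book was wrote by author the."))     # expected: unacceptable
```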
motheecreator/ViT-GPT2-Image-Captioning
motheecreator
2024-10-07T10:03:01Z
148
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "image-to-text", "base_model:motheecreator/ViT-GPT2-Image_Captioning_model", "base_model:finetune:motheecreator/ViT-GPT2-Image_Captioning_model", "endpoints_compatible", "region:us" ]
image-to-text
2024-09-30T21:35:55Z
--- library_name: transformers base_model: motheecreator/ViT-GPT2-Image_Captioning_model tags: - generated_from_trainer - image-to-text metrics: - bleu model-index: - name: ViT-GPT2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT-GPT2 This model is a fine-tuned version of [motheecreator/ViT-GPT2-Image_Captioning_model](https://huggingface.co/motheecreator/ViT-GPT2-Image_Captioning_model) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.125337 - Rouge2 Precision: None - Rouge2 Recall: None - Rouge2 Fmeasure: 0.155 - Bleu: 9.7054 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Bleu | |:-------------:|:------:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:------:| | 2.1537 | 0.9993 | 1171 | 2.13666 | None | None | 0.1531 | 9.4673 | | 2.0434 | 1.9985 | 2342 | 2.125337 | None | None | 0.155 | 9.7054 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
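A brief usage sketch, assuming the checkpoint loads through the standard `image-to-text` pipeline for vision-encoder-decoder models; the image path is a placeholder.

```python
# Sketch (assumption): captioning a single image with the image-to-text pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="motheecreator/ViT-GPT2-Image-Captioning")
result = captioner("path/to/your_image.jpg")  # local path or URL (placeholder)
print(result[0]["generated_text"])
```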
professorf/SmolLM-1.7B-Instruct-f16-gguf
professorf
2024-10-07T10:02:58Z
8
0
transformers
[ "transformers", "gguf", "en", "dataset:HuggingFaceTB/smollm-corpus", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-07T09:37:28Z
--- library_name: transformers license: apache-2.0 language: - en datasets: - HuggingFaceTB/smollm-corpus --- <!-- header start --> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">ProfessorF is Nick V. Flor, PhD<br>Models quantized for research reproducibility purposes</p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SmolLM <center> <img src="https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png" alt="SmolLM" width="1100" height="600"> </center> ## Table of Contents 1. [Model Summary](##model-summary) 2. [Limitations](##limitations) 3. [Training](##training) 4. [License](##license) 5. [Citation](##citation) ## Model Summary SmolLM is a series of state-of-the-art small language models available in three sizes: 135M, 360M, and 1.7B parameters. These models are built on Cosmo-Corpus, a meticulously curated high-quality training dataset. Cosmo-Corpus includes Cosmopedia v2 (28B tokens of synthetic textbooks and stories generated by Mixtral), Python-Edu (4B tokens of educational Python samples from The Stack), and FineWeb-Edu (220B tokens of deduplicated educational web samples from FineWeb). SmolLM models have shown promising results when compared to other models in their size categories across various benchmarks testing common sense reasoning and world knowledge. For detailed information on training, benchmarks and performance, please refer to our full [blog post](https://huggingface.co/blog/smollm). This is SmolLM-1.7B ### Generation ```bash pip install transformers ``` #### Running the model on CPU/GPU/multi GPU * _Using full precision_ ```python # pip install transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "HuggingFaceTB/SmolLM-1.7B" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM checkpoint = "HuggingFaceTB/SmolLM-1.7B" tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for fp16 use `torch_dtype=torch.float16` instead model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 3422.76 MB ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig # to use 4bit use `load_in_4bit=True` instead quantization_config = BitsAndBytesConfig(load_in_8bit=True) checkpoint = "HuggingFaceTB/SmolLM-1.7B" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config) inputs = tokenizer.encode("def 
print_hello_world():", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ```bash >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") # load_in_8bit Memory footprint: 1812.14 MB # load_in_4bit >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB") Memory footprint: 1006.84 MB ``` # Limitations While SmolLM models have been trained on a diverse dataset including educational content and synthetic texts, they have limitations. The models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content. For a more comprehensive discussion of the models' capabilities and limitations, please refer to our full [blog post](https://huggingface.co/blog/smollm). This repository contains a converted version of our latest trained model. We've noticed a small performance difference between this converted checkpoint (transformers) and the original (nanotron). We're currently working to resolve this issue. # Training ## Model - **Architecture:** For architecture detail, see the [blog post](https://huggingface.co/blog/smollm). - **Pretraining steps:** 500k - **Pretraining tokens:** 1T - **Precision:** bfloat16 ## Hardware - **GPUs:** 64 H100 ## Software - **Training Framework:** [Nanotron](https://github.com/huggingface/nanotron/tree/main) # License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) # Citation ```bash @misc{allal2024SmolLM, title={SmolLM - blazingly fast and remarkably powerful}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf}, year={2024}, } ```
ylacombe/multi
ylacombe
2024-10-07T09:59:41Z
53
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-07T09:58:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
den123/Caricature-flux
den123
2024-10-07T09:52:45Z
40
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2024-10-07T07:31:00Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/655e1034-4eda-4866-b27f-b91744353c39.png - text: Angelina Jolie caricature, pen drawing, bw output: url: images/example_7cb04vnuy.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: Caricature license: apache-2.0 --- # Caricature-flux <Gallery /> ## Model description Caricature drawing style ## Trigger words You should use `Caricature` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/den123/Caricature-flux/tree/main) them in the Files & versions tab.
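A hedged diffusers sketch for applying this LoRA; FLUX.1-dev is gated and memory-hungry, so access approval and a high-VRAM GPU are assumed, and the LoRA weight filename is left to diffusers' auto-detection.

```python
# Sketch (assumptions noted): loading FLUX.1-dev and applying the Caricature LoRA.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("den123/Caricature-flux")
pipe.to("cuda")

image = pipe(
    "Caricature of a smiling scientist, pen drawing, bw",  # uses the trigger word "Caricature"
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("caricature.png")
```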