modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
Avinash250325/T5BaseQuestionGeneration
Avinash250325
2025-06-03T19:28:49Z
0
0
null
[ "safetensors", "t5", "chemistry", "biology", "textbook", "question_generation", "exam", "questions", "evaluation", "true_or_false", "multiple_choice_questions", "descriptive", "short_answer_questions", "long_answer", "problems", "quizzes", "physics", "text2text-generation", "en", "dataset:hotpotqa/hotpot_qa", "dataset:rajpurkar/squad", "dataset:allenai/openbookqa", "dataset:google/boolq", "dataset:ucinlp/drop", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:mit", "region:us" ]
text2text-generation
2025-06-03T02:05:03Z
--- license: mit datasets: - hotpotqa/hotpot_qa - rajpurkar/squad - allenai/openbookqa - google/boolq - ucinlp/drop base_model: - google-t5/t5-base pipeline_tag: text2text-generation widget: - text: "<extra_id_97>short answer <extra_id_98>easy <extra_id_99> The sun is the center of our solar system." tags: - chemistry - biology - textbook - question_generation - exam - questions - evaluation - true_or_false - multiple_choice_questions - descriptive - short_answer_questions - long_answer - problems - quizzes - physics language: - en --- # Finetuned T5-Base Question Generator Model This model is a fine-tuned T5 model designed specifically for **automatic question generation** from any given context or passage. It supports different types of questions, such as **short answer**, **multiple choice**, and **true or false**, while also allowing customization by **difficulty level**: easy, medium, or hard. --- ## Why is this Project Important? Educational tools, tutoring platforms, and self-learning systems need a way to **generate relevant questions** automatically from content. Our model bridges that gap by providing a flexible and robust question generation system that uses a **structured prompt** format and is powered by a **fine-tuned `T5-base` model**. ### Key Features - Supports **multiple question types**: - Short answer - Multiple choice - True/false - Questions are generated based on: - The **provided context** - The **type of question** - The **difficulty level** - Difficulty reflects the **reasoning depth** required (multi-hop inference). - Uses a **structured prompt format** with clearly defined tags, making it easy to use or integrate into other systems. - Fine-tuned from the `t5-base` model: - Lightweight and fast - Easy to run on CPU - Ideal for customization by teachers or educational platforms ### Ideal For - Teachers creating quizzes or exam material - EdTech apps generating practice questions - Developers building interactive learning tools - Automated assessment and content enrichment ### Bonus: Retrieval-Augmented Generation (RAG) A **custom RAG function** is also provided. This enables question generation from larger content sources like textbooks: - Input can be a **subheading** or **small excerpt** from a textbook. - The model fetches relevant supporting context from the textbook using a retriever. - Generates questions grounded in the fetched material. This extends the model beyond single-passage generation into more dynamic, scalable educational use cases (a sketch of such a retrieval wrapper is shown after the usage example below). --- ## Prompt Format To generate good-quality questions, the model uses a **structured input prompt** format with special tokens. This helps the model understand the intent and expected output type.
### Prompt Fields: - `<extra_id_97>` - followed by the **question type** - `short answer`, `multiple choice question`, or `true or false question` - `<extra_id_98>` - followed by the **difficulty** - `easy`, `medium`, or `hard` - `<extra_id_99>` - followed by **[optional answer] context** - `optional answer` - for targeted question generation; leave it blank otherwise - `context` - the main passage/content from which questions are generated ### Helper Function to Create the Prompt To simplify prompt construction, use this Python function: ```python def format_prompt(qtype, difficulty, context, answer=""): """ Format input prompt for question generation """ answer_part = f"[{answer}]" if answer else "" return f"<extra_id_97>{qtype} <extra_id_98>{difficulty} <extra_id_99>{answer_part} {context}" ``` --- ## How to Use the Model ```python from transformers import T5Tokenizer, T5ForConditionalGeneration # Load model from Hugging Face Hub model_name = "Avinash250325/T5BaseQuestionGeneration" tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) # Format input prompt def format_prompt(qtype, difficulty, context, answer=""): answer_part = f"[{answer}]" if answer else "" return f"<extra_id_97>{qtype} <extra_id_98>{difficulty} <extra_id_99>{answer_part} {context}" context = "The sun is the center of our solar system." qtype = "short answer" # qtype: ("short answer", "multiple choice question", "true or false question") difficulty = "easy" # difficulty: ("easy", "medium", "hard") prompt = format_prompt(qtype, difficulty, context) # Tokenize and generate inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=150) # Decode output print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
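The card references a custom RAG function but does not include it. Below is a minimal sketch, not part of this repository, of what such a wrapper could look like, assuming a `sentence-transformers` retriever over pre-split textbook passages and reusing `format_prompt`, `tokenizer`, and `model` from the snippet above; the names `retrieve_context` and `generate_question_with_rag`, and the `all-MiniLM-L6-v2` encoder, are illustrative choices rather than part of the original release.

```python
# Hypothetical RAG-style wrapper around the question generator above.
from sentence_transformers import SentenceTransformer, util

retriever = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retriever; any sentence encoder works

def retrieve_context(query, passages, top_k=3):
    """Return the top_k textbook passages most similar to the query/subheading."""
    query_emb = retriever.encode(query, convert_to_tensor=True)
    passage_embs = retriever.encode(passages, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, passage_embs, top_k=top_k)[0]
    return " ".join(passages[hit["corpus_id"]] for hit in hits)

def generate_question_with_rag(query, passages, qtype="short answer", difficulty="easy"):
    """Fetch supporting context for a subheading/excerpt, then generate a question."""
    context = retrieve_context(query, passages)
    prompt = format_prompt(qtype, difficulty, context)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=150)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example: passages would normally be pre-split chunks of a textbook chapter.
textbook_passages = [
    "The sun is the center of our solar system.",
    "Planets orbit the sun because of its gravity.",
]
print(generate_question_with_rag("The solar system", textbook_passages))
```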
IDEOS-18-SMS-Rani-Viral-Uncovering-VIDEOS/FULL.VIDEO.SMS.Rani.Viral.Video.Tutorial.Official
IDEOS-18-SMS-Rani-Viral-Uncovering-VIDEOS
2025-06-03T19:26:02Z
0
0
null
[ "region:us" ]
null
2025-06-03T19:25:26Z
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://videohere.top/?Hgg) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Hgg) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Hgg)
renejay-and-eva-video/wATCH.renejay-and-eva-renejay-and-eva-renejay-and-eva.original
renejay-and-eva-video
2025-06-03T19:25:38Z
0
0
null
[ "region:us" ]
null
2025-06-03T19:21:32Z
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://videohere.top/?renejay-and-eva) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?renejay-and-eva) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?renejay-and-eva)
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_951
luckeciano
2025-06-03T19:25:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T14:28:08Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-NoBaseline_951 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-NoBaseline_951 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_951", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/8lr7xlha) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
pot99rta/FoxCide-12B-Forgottenslop-Mell-GGUF
pot99rta
2025-06-03T19:23:11Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:pot99rta/FoxCide-12B-Forgottenslop-Mell", "base_model:quantized:pot99rta/FoxCide-12B-Forgottenslop-Mell", "endpoints_compatible", "region:us" ]
null
2025-06-03T16:04:30Z
--- base_model: pot99rta/FoxCide-12B-Forgottenslop-Mell library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # FoxCide-12B-Forgottenslop-Mell-GGUF ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636ea389fd9751c3d081e88e/iuh54CjgfLk1mVRibROQq.png) ```Models Merged:``` ```1. DoppelReflEx/MN-12B-FoxFrame-Miyuri``` ```2. pot99rta/Patricide-12B-Forgottenslop-Mell``` ```Preset:``` ```Use ChatML or Mistral``` This model was converted to GGUF format from [`pot99rta/FoxCide-12B-Forgottenslop-Mell`](https://huggingface.co/pot99rta/FoxCide-12B-Forgottenslop-Mell) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pot99rta/FoxCide-12B-Forgottenslop-Mell) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pot99rta/FoxCide-12B-Forgottenslop-Mell-Q8_0-GGUF --hf-file foxcide-12b-forgottenslop-mell-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pot99rta/FoxCide-12B-Forgottenslop-Mell-Q8_0-GGUF --hf-file foxcide-12b-forgottenslop-mell-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pot99rta/FoxCide-12B-Forgottenslop-Mell-Q8_0-GGUF --hf-file foxcide-12b-forgottenslop-mell-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pot99rta/FoxCide-12B-Forgottenslop-Mell-Q8_0-GGUF --hf-file foxcide-12b-forgottenslop-mell-q8_0.gguf -c 2048 ```
featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF
featherless-ai-quants
2025-06-03T19:22:30Z
0
0
null
[ "gguf", "text-generation", "base_model:KaraKaraWitch/spiral-da-HYAH-Qwen2.5-72b", "base_model:quantized:KaraKaraWitch/spiral-da-HYAH-Qwen2.5-72b", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-03T17:33:21Z
--- base_model: KaraKaraWitch/spiral-da-HYAH-Qwen2.5-72b pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # KaraKaraWitch/spiral-da-HYAH-Qwen2.5-72b GGUF Quantizations ๐Ÿš€ ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations ๐Ÿ“Š | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-IQ4_XS](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-IQ4_XS) | 38302.65 MB (folder) | | Q2_K | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q2_K](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q2_K) | 28430.71 MB (folder) | | Q3_K_L | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_L](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_L) | 37675.12 MB (folder) | | Q3_K_M | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_M](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_M) | 35952.31 MB (folder) | | Q3_K_S | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_S](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q3_K_S) | 32890.12 MB (folder) | | Q4_K_M | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q4_K_M](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q4_K_M) | 45219.15 MB (folder) | | Q4_K_S | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q4_K_S](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q4_K_S) | 41856.03 MB (folder) | | Q5_K_M | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q5_K_M](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q5_K_M) | 51925.15 MB (folder) | | Q5_K_S | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q5_K_S](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q5_K_S) | 48995.15 MB (folder) | | Q6_K | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q6_K](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q6_K) | 61366.68 MB (folder) | | Q8_0 | [KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q8_0](https://huggingface.co/featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF/tree/main/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q8_0) | 73683.37 MB (folder) | --- ## โšก Powered by [Featherless AI](https://featherless.ai) ### Key Features - ๐Ÿ”ฅ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - ๐Ÿ› ๏ธ **Zero Infrastructure** - No server setup or maintenance required - ๐Ÿ“š **Vast Compatibility** - Support for 2400+ models and counting - ๐Ÿ’Ž **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | 
[Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
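Each row in the table above points to a per-quant folder of (possibly split) GGUF files rather than a single file. A minimal sketch, not from the card, for fetching just one of those folders with `huggingface_hub`; the Q4_K_M folder name is copied from the table:

```python
# Download only the Q4_K_M quantization folder from the quants repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="featherless-ai-quants/KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-GGUF",
    allow_patterns=["KaraKaraWitch-spiral-da-HYAH-Qwen2.5-72b-Q4_K_M/*"],
)
print(local_dir)  # local snapshot containing the GGUF file(s) for that quant
```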
pointserv/fanniemae-phi-2-lora
pointserv
2025-06-03T19:21:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "phi", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T18:00:52Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: microsoft/phi-2 widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
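The repo is tagged `peft`, so the checkpoint may ship as a LoRA adapter on top of `microsoft/phi-2` rather than a fully merged model. A minimal sketch, not from the card, for loading it with the PEFT auto class under that assumption:

```python
# Hypothetical adapter-style loading; assumes the repo holds LoRA adapter weights.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "pointserv/fanniemae-phi-2-lora"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    torch_dtype="auto",
).eval()
```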
ivany-wright/Leak.video.18.ivany.wright.twitter.moon.10.ivany.ivany.video.ivannywright
ivany-wright
2025-06-03T19:20:50Z
0
0
null
[ "region:us" ]
null
2025-06-03T19:17:13Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=ivany-wright) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=ivany-wright) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=ivany-wright)
BootesVoid/cmbgtzreg055akfxszvn8uap5_cmbgvcfq705aykfxsuh52cbgm
BootesVoid
2025-06-03T19:20:49Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-03T19:20:47Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: FEBO --- # Cmbgtzreg055Akfxszvn8Uap5_Cmbgvcfq705Aykfxsuh52Cbgm <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `FEBO` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "FEBO", "lora_weights": "https://huggingface.co/BootesVoid/cmbgtzreg055akfxszvn8uap5_cmbgvcfq705aykfxsuh52cbgm/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbgtzreg055akfxszvn8uap5_cmbgvcfq705aykfxsuh52cbgm', weight_name='lora.safetensors') image = pipeline('FEBO').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbgtzreg055akfxszvn8uap5_cmbgvcfq705aykfxsuh52cbgm/discussions) to add images that show off what youโ€™ve made with this LoRA.
kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2-2-epochs
kowndinya23
2025-06-03T19:20:36Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2", "base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T15:27:33Z
--- base_model: kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2 datasets: trl-lib/ultrafeedback_binarized library_name: transformers model_name: ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2-2-epochs tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2-2-epochs This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.8-beta-0.2-2-epochs", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/oadi1l98) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MinaMila/llama_8b_unlearned_unbalanced_neutral_2nd_1e-6_1.0_0.0_1.0_1.0_epoch2
MinaMila
2025-06-03T19:20:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T19:17:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmb8cojzk0kl2lexpin9308r5_cmbgvgqps05bwkfxs8cwifeyd
BootesVoid
2025-06-03T19:20:11Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-03T19:20:09Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: JADA --- # Cmb8Cojzk0Kl2Lexpin9308R5_Cmbgvgqps05Bwkfxs8Cwifeyd <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `JADA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "JADA", "lora_weights": "https://huggingface.co/BootesVoid/cmb8cojzk0kl2lexpin9308r5_cmbgvgqps05bwkfxs8cwifeyd/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb8cojzk0kl2lexpin9308r5_cmbgvgqps05bwkfxs8cwifeyd', weight_name='lora.safetensors') image = pipeline('JADA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb8cojzk0kl2lexpin9308r5_cmbgvgqps05bwkfxs8cwifeyd/discussions) to add images that show off what youโ€™ve made with this LoRA.
JoshMe1/ec448619-5095-4d84-a1bc-8083e6241e09
JoshMe1
2025-06-03T19:17:50Z
0
0
peft
[ "peft", "pytorch", "tensorboard", "safetensors", "llama", "arxiv:1910.09700", "base_model:unsloth/tinyllama", "base_model:adapter:unsloth/tinyllama", "region:us" ]
null
2025-06-03T16:28:48Z
--- base_model: unsloth/tinyllama library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF
SuperbEmphasis
2025-06-03T19:17:03Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function", "base_model:quantized:SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function", "endpoints_compatible", "region:us" ]
null
2025-06-03T19:16:31Z
--- base_model: SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function tags: - llama-cpp - gguf-my-repo --- # SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF This model was converted to GGUF format from [`SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function`](https://huggingface.co/SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF --hf-file mistral-nemo-r1-erp-reasoning-limit-function-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF --hf-file mistral-nemo-r1-erp-reasoning-limit-function-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF --hf-file mistral-nemo-r1-erp-reasoning-limit-function-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo SuperbEmphasis/Mistral-Nemo-R1-ERP-Reasoning-Limit-Function-Q4_K_M-GGUF --hf-file mistral-nemo-r1-erp-reasoning-limit-function-q4_k_m.gguf -c 2048 ```
natix-miner14/streetvision
natix-miner14
2025-06-03T19:16:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T15:20:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/Harbinger-24B-6bit
mlx-community
2025-06-03T19:14:39Z
7
0
mlx
[ "mlx", "safetensors", "mistral", "text adventure", "roleplay", "text-generation", "conversational", "en", "base_model:LatitudeGames/Harbinger-24B", "base_model:quantized:LatitudeGames/Harbinger-24B", "license:apache-2.0", "6-bit", "region:us" ]
text-generation
2025-05-16T10:05:38Z
--- base_model: LatitudeGames/Harbinger-24B tags: - text adventure - roleplay - mlx license: apache-2.0 language: - en library_name: mlx pipeline_tag: text-generation --- # mlx-community/Harbinger-24B-6bit This model [mlx-community/Harbinger-24B-6bit](https://huggingface.co/mlx-community/Harbinger-24B-6bit) was converted to MLX format from [LatitudeGames/Harbinger-24B](https://huggingface.co/LatitudeGames/Harbinger-24B) using mlx-lm version **0.25.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Harbinger-24B-6bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
krishnavadithya/medgemma-4b-it-sft-lora-crc100k
krishnavadithya
2025-06-03T19:14:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-05-29T21:45:51Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-4b-it-sft-lora-crc100k tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for medgemma-4b-it-sft-lora-crc100k This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="krishnavadithya/medgemma-4b-it-sft-lora-crc100k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2-2-epochs
kowndinya23
2025-06-03T19:14:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2", "base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T15:24:15Z
--- base_model: kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2 datasets: trl-lib/ultrafeedback_binarized library_name: transformers model_name: ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2-2-epochs tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2-2-epochs This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-8b-1-epochs-alpha-0.4-beta-0.2-2-epochs", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/t43g6ydu) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
komal-mahawar-video/wATCH.komal.mahawar.viral.video.original
komal-mahawar-video
2025-06-03T19:14:04Z
0
0
null
[ "region:us" ]
null
2025-06-03T19:11:54Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=komal-mahawar) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=komal-mahawar) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=komal-mahawar)
Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF
Merlinoz11
2025-06-03T19:11:25Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:ArliAI/RpR-v4-Fast-30B-A3B", "base_model:quantized:ArliAI/RpR-v4-Fast-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T19:09:38Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo base_model: ArliAI/RpR-v4-Fast-30B-A3B --- # Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF This model was converted to GGUF format from [`ArliAI/RpR-v4-Fast-30B-A3B`](https://huggingface.co/ArliAI/RpR-v4-Fast-30B-A3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ArliAI/RpR-v4-Fast-30B-A3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF --hf-file rpr-v4-fast-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF --hf-file rpr-v4-fast-30b-a3b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF --hf-file rpr-v4-fast-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Merlinoz11/RpR-v4-Fast-30B-A3B-Q6_K-GGUF --hf-file rpr-v4-fast-30b-a3b-q6_k.gguf -c 2048 ```
ibuki95/model_172
ibuki95
2025-06-03T19:08:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-28T18:51:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vaeeedc/mistral-finetuned
vaeeedc
2025-06-03T19:08:35Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-03T19:08:35Z
--- license: apache-2.0 ---
Swissmountain/Llama3.2_1b_q4_k_m
Swissmountain
2025-06-03T19:08:00Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T19:07:22Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Swissmountain - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sapna-shah-18s/wATCH.Sapna.shah.viral.video.original
Sapna-shah-18s
2025-06-03T19:06:53Z
0
0
null
[ "region:us" ]
null
2025-06-03T19:05:13Z
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=Sapna-shah) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=Sapna-shah) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Sapna-shah)
natix-miner13/streetvision
natix-miner13
2025-06-03T19:06:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T15:16:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jusjinuk/gemma-3-27b-it-4bit-GuidedQuant-LNQ
jusjinuk
2025-06-03T19:06:12Z
2
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:mit", "region:us" ]
null
2025-06-02T03:15:20Z
--- base_model: - google/gemma-3-27b-it base_model_relation: quantized license: mit --- # Model Card - Base model: `google/gemma-3-27b-it` - Quantization method: LNQ with GuidedQuant Hessian - Target bit-width: 4 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 1 # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
GingerBled/MCQA_on_SFT_adam
GingerBled
2025-06-03T19:03:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T19:03:20Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MeiKing111/v1land_26
MeiKing111
2025-06-03T19:02:33Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-03T18:21:13Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
natix-miner11/streetvision
natix-miner11
2025-06-03T19:02:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T15:08:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MeiKing111/v1land_24
MeiKing111
2025-06-03T19:01:36Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-03T18:20:58Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
c0ntrolZ/merged-largeBase-lora-allMCQA
c0ntrolZ
2025-06-03T18:59:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T13:03:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
timarni/qwen3-0.6B-mmlu-sciq
timarni
2025-06-03T18:57:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "dataset:timarni/sciq_alpaca", "base_model:timarni/qwen3-0.6B-mmlu-alpaca", "base_model:finetune:timarni/qwen3-0.6B-mmlu-alpaca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T18:56:25Z
--- library_name: transformers license: apache-2.0 base_model: timarni/qwen3-0.6B-mmlu-alpaca tags: - generated_from_trainer datasets: - timarni/sciq_alpaca model-index: - name: outputs/qwen3_mmlu_sciq_alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.9.2` ```yaml base_model: timarni/qwen3-0.6B-mmlu-alpaca # Automatically upload checkpoint and final model to HF # hub_model_id: username/custom_model_name plugins: - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin strict: false chat_template: qwen3 datasets: - path: timarni/sciq_alpaca type: alpaca split: train val_set_size: 0.15 output_dir: ./outputs/qwen3_mmlu_sciq_alpaca dataset_prepared_path: last_run_prepared sequence_len: 4096 #2048 sample_packing: true eval_sample_packing: true pad_to_sequence_len: true # To be sure that no LORA is done adapter: null lora: false merge_lora: false wandb_project: mnlp_project wandb_entity: tim-arni wandb_watch: wandb_name: qwen3-0.6B-mmlu-sciq wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 1 num_epochs: 1 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 0.00005 # 0.0002 bf16: auto tf32: true gradient_checkpointing: offload gradient_checkpointing_kwargs: use_reentrant: false resume_from_checkpoint: logging_steps: 1 flash_attention: true warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 weight_decay: 0.0 special_tokens: ``` </details><br> # outputs/qwen3_mmlu_sciq_alpaca This model is a fine-tuned version of [timarni/qwen3-0.6B-mmlu-alpaca](https://huggingface.co/timarni/qwen3-0.6B-mmlu-alpaca) on the timarni/sciq_alpaca dataset. It achieves the following results on the evaluation set: - Loss: 0.0583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.149 | 0.0127 | 1 | 0.0983 | | 0.0755 | 0.2532 | 20 | 0.0744 | | 0.0678 | 0.5063 | 40 | 0.0625 | | 0.0564 | 0.7595 | 60 | 0.0583 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.5.1+cu121 - Datasets 3.5.1 - Tokenizers 0.21.1
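The card above documents the Axolotl training run and evaluation loss but stops short of an inference example. Below is a minimal sketch, assuming the checkpoint loads with the standard Hugging Face Transformers causal-LM API; the repo id comes from this record, while the prompt and generation settings are purely illustrative.

```python
# Minimal inference sketch (assumption: the repo holds a standard Transformers causal-LM checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timarni/qwen3-0.6B-mmlu-sciq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The run was fine-tuned on SciQ-style data, so a short science question is a natural probe.
prompt = "Question: What gas do plants absorb during photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```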
Swissmountain/Llama3.2_1b_16bits
Swissmountain
2025-06-03T18:56:28Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T18:49:20Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Swissmountain - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit - **Dataset :** MaziyarPanahi/french_instruct_sharegpt This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
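The Unsloth upload note above names the base model and the French instruction dataset but gives no usage snippet. Here is a minimal sketch, assuming the repo is a standard Transformers chat checkpoint; the French prompt is illustrative and simply reflects the stated training data.

```python
# Minimal chat sketch (assumption: the generic text-generation pipeline can load this checkpoint).
from transformers import pipeline

generator = pipeline("text-generation", model="Swissmountain/Llama3.2_1b_16bits")
messages = [{"role": "user", "content": "Explique brièvement ce qu'est l'apprentissage automatique."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```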
jobz-hunting-sajal-malik-18s/wATCH.jobz.hunting.sajal.malik.viral.video.original
jobz-hunting-sajal-malik-18s
2025-06-03T18:56:24Z
0
0
null
[ "region:us" ]
null
2025-06-03T18:54:44Z
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=jobz-hunting-sajal-malik) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=jobz-hunting-sajal-malik) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=jobz-hunting-sajal-malik)
ypwang61/sharp_s540
ypwang61
2025-06-03T18:54:59Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-06-03T18:51:38Z
--- license: apache-2.0 ---
natix-miner7/streetvision
natix-miner7
2025-06-03T18:54:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-30T14:48:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adqdr4/VIDEO.18.hdkhiriearn.in.sarkari.video.viral.hdkhiriearn.in.korean.viral.video
adqdr4
2025-06-03T18:54:25Z
0
0
null
[ "region:us" ]
null
2025-06-03T18:53:19Z
<a href="https://allyoutubers.com/ivany-wright-twitter-ivany-wright-video-leak-moon-10-ivany-moon-10-com"> ๐ŸŒ VIDEO.18.hdkhiriearn.in.sarkari.video.viral.hdkhiriearn.in.korean.viral.video ๐Ÿ”ด โžคโ–บDOWNLOAD๐Ÿ‘‰๐Ÿ‘‰๐ŸŸข โžค <a href="https://allyoutubers.com/ivany-wright-twitter-ivany-wright-video-leak-moon-10-ivany-moon-10-com"> ๐ŸŒ VIDEO.18.hdkhiriearn.in.sarkari.video.viral.hdkhiriearn.in.korean.viral.video <a href="https://allyoutubers.com/ivany-wright-twitter-ivany-wright-video-leak-moon-10-ivany-moon-10-com"> ๐ŸŒ VIDEO.18.hdkhiriearn.in.sarkari.video.viral.hdkhiriearn.in.korean.viral.video ๐Ÿ”ด โžคโ–บDOWNLOAD๐Ÿ‘‰๐Ÿ‘‰๐ŸŸข โžค <a href="https://allyoutubers.com/ivany-wright-twitter-ivany-wright-video-leak-moon-10-ivany-moon-10-com"> ๐ŸŒ VIDEO.18.hdkhiriearn.in.sarkari.video.viral.hdkhiriearn.in.korean.viral.video
MinaMila/llama_8b_unlearned_unbalanced_neutral_2nd_1e-6_0.0_1.0_0.0_1.0_epoch1
MinaMila
2025-06-03T18:54:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T18:51:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Confused404/eng-gmq-finetuned-no
Confused404
2025-06-03T18:53:25Z
16
0
null
[ "pytorch", "safetensors", "marian", "translation", "en", "no", "dataset:Helsinki-NLP/opus-100", "base_model:Helsinki-NLP/opus-mt-en-gmq", "base_model:finetune:Helsinki-NLP/opus-mt-en-gmq", "license:apache-2.0", "region:us" ]
translation
2025-06-02T22:37:03Z
--- language: - en - no license: apache-2.0 tags: - translation - marian - pytorch model_type: marian pipeline_tag: translation datasets: - Helsinki-NLP/opus-100 base_model: - Helsinki-NLP/opus-mt-en-gmq widget: - source: "Hello, how are you?" example_title: "EN → NO" --- # My Finetuned MarianMT Model (English --> Norwegian) This model is a fine-tuned version of `Helsinki-NLP/opus-mt-en-gmq` on a custom dataset. ## Usage ```python from transformers import MarianMTModel, MarianTokenizer model_name = "Confused404/eng-gmq-finetuned-no" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) text = "Hello, how are you?" batch = tokenizer([text], return_tensors="pt", padding=True) translated = model.generate(**batch) print(tokenizer.decode(translated[0], skip_special_tokens=True)) ```
ypwang61/sharp_s680
ypwang61
2025-06-03T18:52:59Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-06-03T18:49:41Z
--- license: apache-2.0 ---
Kwts/OntologySummarizer
Kwts
2025-06-03T18:52:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "region:us" ]
null
2025-06-02T21:50:08Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
madhueb/qwen-stem-instruct
madhueb
2025-06-03T18:51:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-30T03:30:01Z
--- base_model: Qwen/Qwen3-0.6B-Base library_name: transformers model_name: qwen-stem-instruct tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen-stem-instruct This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="madhueb/qwen-stem-instruct", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmarb2y5d01hiu1cg10lgxkch_cmbgu563i055okfxsp0c94qj9
BootesVoid
2025-06-03T18:49:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-03T18:49:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HOT --- # Cmarb2Y5D01Hiu1Cg10Lgxkch_Cmbgu563I055Okfxsp0C94Qj9 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HOT` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HOT", "lora_weights": "https://huggingface.co/BootesVoid/cmarb2y5d01hiu1cg10lgxkch_cmbgu563i055okfxsp0c94qj9/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmarb2y5d01hiu1cg10lgxkch_cmbgu563i055okfxsp0c94qj9', weight_name='lora.safetensors') image = pipeline('HOT').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmarb2y5d01hiu1cg10lgxkch_cmbgu563i055okfxsp0c94qj9/discussions) to add images that show off what youโ€™ve made with this LoRA.
covecove/cheekyb_schnell
covecove
2025-06-03T18:48:34Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "license:other", "region:us" ]
text-to-image
2025-06-03T18:34:58Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: smiling cheekyb statue output: url: images/1748950640471__000003000_1.jpg - text: a group of cheekyb standing around a screen that shows a crypto chart output: url: images/1748950668154__000003000_2.jpg - text: beautifully rendered cheekyb chick, very cute, sanrio style, it looks dumb output: url: images/1748950613158__000003000_0.jpg base_model: black-forest-labs/FLUX.1-schnell instance_prompt: cheekyb license: other license_name: vpl license_link: https://viralpubliclicense.org/VPL.txt --- # cheekyb schnell <Gallery /> ## Model description ![1748950640471__000003000_1.jpg](https://cdn-uploads.huggingface.co/production/uploads/630cd6487dacb93b33536b7c/ZMJg5v8v3u4XSaSFAShna.jpeg) ## Trigger words You should use `cheekyb` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/covecove/cheekyb_schnell/tree/main) them in the Files & versions tab.
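The card above gives the trigger word and base model but no loading code. A minimal diffusers sketch follows, assuming the LoRA weight file is named `lora.safetensors` (check the repo's file list for the actual name) and that a CUDA device is available.

```python
# Minimal FLUX.1-schnell + LoRA sketch (assumptions: weight file name and CUDA availability).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("covecove/cheekyb_schnell", weight_name="lora.safetensors")

# FLUX.1-schnell is distilled for few-step sampling; "cheekyb" is the documented trigger word.
image = pipe("smiling cheekyb statue", num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("cheekyb.png")
```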
salmankhanpm/g3-un
salmankhanpm
2025-06-03T18:45:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-03T18:44:49Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** salmankhanpm - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
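The Unsloth note above describes the fine-tune but not how to run it. Here is a minimal sketch, assuming the repo holds merged weights loadable with a recent Transformers release that supports Gemma 3; the prompt and settings are illustrative.

```python
# Minimal inference sketch (assumptions: merged weights in the repo and Gemma 3 support in transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="salmankhanpm/g3-un")
messages = [{"role": "user", "content": "Summarize what fine-tuning a language model means."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```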
MigtheWise/ppo-testing-lunarlander
MigtheWise
2025-06-03T18:44:00Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T18:43:33Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -153.81 +/- 47.21 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
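The usage section above is still a TODO with only a stub import. A minimal sketch (not the author's code) of pulling this checkpoint from the Hub with `huggingface_sb3` and evaluating it could look like the following; the checkpoint filename is a hypothetical placeholder.

```python
# Minimal sketch, not the author's script. The filename passed to load_from_hub is an
# assumption; check the repo's files for the actual .zip name.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="MigtheWise/ppo-testing-lunarlander",
    filename="ppo-testing-lunarlander.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

# On newer Gymnasium releases the environment id is "LunarLander-v3".
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```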
huihui-ai/DeepSeek-R1-0528-GGUF
huihui-ai
2025-06-03T18:43:09Z
0
1
transformers
[ "transformers", "gguf", "deepseek", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T02:24:16Z
--- license: mit library_name: transformers base_model: - deepseek-ai/DeepSeek-R1-0528 tags: - deepseek - transformers --- # huihui-ai/DeepSeek-R1-0528-GGUF This model was converted from DeepSeek-R1-0528 to BF16. Here we simply provide the conversion commands and related information about Ollama. ## FP8 to BF16 1. Download the [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) model; this requires approximately 641 GB of space. ``` cd /home/admin/models huggingface-cli download deepseek-ai/DeepSeek-R1-0528 --local-dir ./deepseek-ai/DeepSeek-R1-0528 ``` 2. Create the environment. ``` conda create -yn DeepSeek-V3 python=3.12 conda activate DeepSeek-V3 pip install -r requirements.txt ``` 3. Convert to BF16; this requires approximately an additional 1.3 TB of space. You need to download the conversion code from the "inference" folder of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3). ``` cd deepseek-ai/DeepSeek-V3/inference python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16 ``` ## BF16 to f16.gguf 1. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) conversion script to convert DeepSeek-R1-0528-bf16 to GGUF format; this requires approximately an additional 1.3 TB of space. ``` python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf --outtype f16 ``` 2. Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) quantization tool to quantize the model (llama-quantize needs to be compiled); see the other [quant options](https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/quantize.cpp). Convert to Q2_K first; this requires approximately an additional 227 GB of space. ``` llama-quantize /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf Q2_K ``` 3. Use llama-cli to test. ``` llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-R1-0528-bf16/ggml-model-Q2_K.gguf -n 2048 ``` ## Use with ollama **Note:** This model requires [Ollama 0.9](https://github.com/ollama/ollama/releases/tag/v0.9.0). You can use [huihui_ai/deepseek-r1:671b-0528-Q2_K](https://ollama.com/huihui_ai/deepseek-r1:671b-0528-Q2_K) directly. ``` ollama run huihui_ai/deepseek-r1:671b-0528-Q2_K ```
andrewdalpino/ESM2-35M-Protein-Biological-Process
andrewdalpino
2025-06-03T18:41:43Z
35
0
transformers
[ "transformers", "tensorboard", "onnx", "safetensors", "esm", "text-classification", "gene-ontology", "proteomics", "dataset:andrewdalpino/AmiGO", "base_model:facebook/esm2_t12_35M_UR50D", "base_model:quantized:facebook/esm2_t12_35M_UR50D", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-13T03:37:13Z
--- library_name: transformers tags: - gene-ontology - proteomics datasets: - andrewdalpino/AmiGO metrics: - precision - recall - f1 base_model: - facebook/esm2_t12_35M_UR50D pipeline_tag: text-classification --- # ESM2 Protein Function Caller An Evolutionary-scale Model (ESM) for protein function prediction from amino acid sequences using the Gene Ontology (GO). Based on the ESM2 Transformer architecture, pre-trained on [UniRef50](https://www.uniprot.org/help/uniref), and fine-tuned on the [AmiGO](https://huggingface.co/datasets/andrewdalpino/AmiGO) dataset, this model predicts the GO subgraph for a particular protein sequence - giving you insight into the molecular function, biological process, and location of the activity inside the cell. **Note**: This version only models the `biological process` subgraph of the gene ontology. ## What are GO terms? > "The Gene Ontology (GO) is a concept hierarchy that describes the biological function of genes and gene products at different levels of abstraction (Ashburner et al., 2000). It is a good model to describe the multi-faceted nature of protein function." > "GO is a directed acyclic graph. The nodes in this graph are functional descriptors (terms or classes) connected by relational ties between them (is_a, part_of, etc.). For example, terms 'protein binding activity' and 'binding activity' are related by an is_a relationship; however, the edge in the graph is often reversed to point from binding towards protein binding. This graph contains three subgraphs (subontologies): Molecular Function (MF), Biological Process (BP), and Cellular Component (CC), defined by their root nodes. Biologically, each subgraph represent a different aspect of the protein's function: what it does on a molecular level (MF), which biological processes it participates in (BP) and where in the cell it is located (CC)." From [CAFA 5 Protein Function Prediction](https://www.kaggle.com/competitions/cafa-5-protein-function-prediction/data) ## Code Repository https://github.com/andrewdalpino/esm2-function-classifier ## Model Specs - **Vocabulary Size**: 33 - **Embedding Dimensions**: 480 - **Attention Heads**: 20 - **Encoder Layers**: 12 - **Context Length**: 1026 ## Basic Example For a basic demonstration we can rank the GO terms for a particular sequence. For a more advanced example see the [predict-subgraph.py](https://github.com/andrewdalpino/esm2-function-classifier/blob/master/predict-subgraph.py) source file. ```python import torch from transformers import EsmTokenizer, EsmForSequenceClassification model_name = "andrewdalpino/ESM2-35M-Protein-Biological-Process" tokenizer = EsmTokenizer.from_pretrained(model_name) model = EsmForSequenceClassification.from_pretrained(model_name) model.eval() sequence = "MCNAWYISVDFEKNREDKSKCIHTRRNSGPKLLEHVMYEVLRDWYCLEGENVYMM" top_k = 10 out = tokenizer(sequence) input_ids = out["input_ids"] input_ids = torch.tensor(input_ids, dtype=torch.int64).unsqueeze(0) with torch.no_grad(): outputs = model.forward(input_ids) probabilities = torch.sigmoid(outputs.logits.squeeze(0)) probabilities, indices = torch.topk(probabilities, top_k) probabilities = probabilities.tolist() terms = [model.config.id2label[index] for index in indices.tolist()] print(f"Top {top_k} GO Terms:") for term, probability in zip(terms, probabilities): print(f"{probability:.4f}: {term}") ``` ## References: >- A. Rives, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences, 2021. >- Z. Lin, et al.
Evolutionary-scale prediction of atomic level protein structure with a language model, 2022. >- G. A. Merino, et al. Hierarchical deep learning for predicting GO annotations by integrating protein knowledge, 2022. >- M. Ashburner, et al. Gene Ontology: tool for the unification of biology, 2000.
aitasum/bart_aitasum_fusion
aitasum
2025-06-03T18:40:25Z
0
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-03T18:12:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
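The card above is an unfilled template, but the repo metadata (a `bart` checkpoint with the `text2text-generation` pipeline tag) is enough for a hedged loading sketch. What the model was actually fine-tuned to produce is not documented, so the example input and the expected behavior below are assumptions.

```python
# Minimal sketch based only on the repo metadata (a BART text2text-generation checkpoint).
# The model's intended task is undocumented, so treat the generated text accordingly.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aitasum/bart_aitasum_fusion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the document you want the model to transform."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```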
BilateralBusiness/perma_chef_mandil_tex_negro_caballero_1_20250603_1819
BilateralBusiness
2025-06-03T18:39:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-03T18:27:31Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: perma_chef_mandil_tex_negro_caballero_1_20250603_1819 --- # Perma_Chef_Mandil_Tex_Negro_Caballero_1_20250603_1819 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `perma_chef_mandil_tex_negro_caballero_1_20250603_1819` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "perma_chef_mandil_tex_negro_caballero_1_20250603_1819", "lora_weights": "https://huggingface.co/BilateralBusiness/perma_chef_mandil_tex_negro_caballero_1_20250603_1819/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BilateralBusiness/perma_chef_mandil_tex_negro_caballero_1_20250603_1819', weight_name='lora.safetensors') image = pipeline('perma_chef_mandil_tex_negro_caballero_1_20250603_1819').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BilateralBusiness/perma_chef_mandil_tex_negro_caballero_1_20250603_1819/discussions) to add images that show off what you've made with this LoRA.
Cikgu-Fadhilah-Viral-6/18.Original.Full.Clip.Cikgu.Fadhilah.Viral.Video.Leaks.Official
Cikgu-Fadhilah-Viral-6
2025-06-03T18:35:40Z
0
0
null
[ "region:us" ]
null
2025-06-03T18:31:46Z
[🔴 ➤► Click Here to 👉👉 (Full video Link)](https://videohere.top/?Cikgu-Fadhilah) [►✅ CLICK HERE ==►► Full Video ❤️❤️⬇️⬇️](https://videohere.top/?Cikgu-Fadhilah) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Cikgu-Fadhilah)
AnthonyGosselin/Ctrl-Crash
AnthonyGosselin
2025-06-03T18:34:03Z
0
2
diffusers
[ "diffusers", "arxiv:2506.00227", "base_model:stabilityai/stable-video-diffusion-img2vid-xt-1-1", "base_model:finetune:stabilityai/stable-video-diffusion-img2vid-xt-1-1", "license:cc-by-4.0", "region:us" ]
null
2025-06-02T18:49:58Z
--- license: cc-by-4.0 base_model: - stabilityai/stable-video-diffusion-img2vid-xt-1-1 --- # Model Card for Ctrl-Crash Generate car crash videos from an initial frame, using bounding-box and crash type control signals. <p align="center"> <table cellspacing="0" cellpadding="0"> <tr> <td><img src="etc/genvid_57_11_04453.gif" width="512"></td> <td><img src="etc/genvid_64_48_08386.gif" width="512"></td> <td><img src="etc/genvid_87_21_08924.gif" width="512"></td> </tr> </table> </p> (Above) Examples of generated crashes ## Model Details <p align="left"> <img src="architecture_figure.png" width=800> </p> <!-- TODO: Provide a longer summary of what this model is. --> - Visit the **project page** for demos: https://anthonygosselin.github.io/Ctrl-Crash-ProjectPage/ - Visit the **repository** to get started: https://github.com/AnthonyGosselin/Ctrl-Crash - Read the **paper** for more details: https://arxiv.org/abs/2506.00227 This model uses the Stability AI Image-to-Video model (SVD 1.1) as a base model: https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1 ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> <!-- TODO: Here we can describe the different operation modes (Reconstruction, Prediction and counterfactuals) --> Ctrl-Crash supports different task settings, each enabled by varying the available control signals, namely: - **(1) Crash Reconstruction**: Given an initial image, full bounding box sequence, and a crash type, the model reconstructs a consistent video combining the visual context of the initial frame with agent motion derived from the bounding boxes. - **(2) Crash Prediction**: Given the initial frame and only a few initial bounding box frames (e.g., 0-9), the model predicts the future motion of agents in a way that aligns with the target crash type. - **(3) Crash Counterfactuals**: Extending the prediction task, this mode varies the crash type signal while keeping other inputs fixed, enabling the generation of multiple plausible outcomes for the same scene, supporting counterfactual safety reasoning. ## Bias, Risks, and Limitations Despite its strong performance, our approach has several limitations, which motivates future work in this direction. - Counterfactual outcomes can be hard to generate when initial scene conditions conflict with the desired crash type. - The model relies heavily on bounding boxes, making it sensitive to tracking errors, especially in fully conditioned reconstruction. - With no bounding box conditioning, motion direction can be ambiguous, and 2D boxes struggle to capture rotation or orientation, limiting realism in behaviors like spinouts. - Does not support text conditioning. **BibTeX:** ```bibtex @misc{gosselin2025ctrlcrashcontrollablediffusionrealistic, title={Ctrl-Crash: Controllable Diffusion for Realistic Car Crashes}, author={Anthony Gosselin and Ge Ya Luo and Luis Lara and Florian Golemo and Derek Nowrouzezahrai and Liam Paull and Alexia Jolicoeur-Martineau and Christopher Pal}, year={2025}, eprint={2506.00227}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2506.00227}, } ```
ffurfaro/Titans-Llama-3.2-1B
ffurfaro
2025-06-03T18:33:47Z
0
0
peft
[ "peft", "safetensors", "tptt", "custom_code", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "region:us" ]
null
2025-06-03T18:33:44Z
--- base_model: meta-llama/Llama-3.2-1B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
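The card above is an unfilled template; based only on the repo metadata (a PEFT adapter with `tptt`/`custom_code` tags on top of meta-llama/Llama-3.2-1B), a hedged loading sketch could look like the following. It assumes the adapter loads as a standard PEFT adapter; if the custom_code tag means custom modeling code is required, the loading path would differ.

```python
# Minimal sketch inferred from the repo metadata (PEFT adapter over meta-llama/Llama-3.2-1B).
# Assumption: the adapter loads with the standard PEFT API despite the tptt/custom_code tags.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-1B"
adapter_id = "ffurfaro/Titans-Llama-3.2-1B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```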
amene-gafsi/MNLP_M2_document_encoder
amene-gafsi
2025-06-03T18:33:35Z
15
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-26T21:49:15Z
--- language: en license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers pipeline_tag: sentence-similarity --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. 
We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
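The fine-tuning section of the card above describes the objective only in prose (in-batch cosine similarities scored against the true pairs with a cross-entropy loss). The snippet below is a small illustrative sketch of that idea, not the project's `train_script.py`; the similarity scale of 20 is an assumed value.

```python
# Illustrative sketch of the in-batch contrastive objective described in the card,
# not the actual train_script.py. Each anchor is scored against every positive in the
# batch; the true pair sits on the diagonal and is the cross-entropy target.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) embeddings for the two sides of each pair
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    scores = scale * anchor @ positive.T                              # all pairwise cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)      # index of the true pair per row
    return F.cross_entropy(scores, labels)

# Toy usage with random tensors standing in for model outputs (384-dim, matching this model)
loss = in_batch_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss.item())
```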
ArliAI/Qwen2.5-72B-ArliAI-RPMax-v2
ArliAI
2025-06-03T18:30:38Z
0
0
null
[ "safetensors", "qwen2", "license:other", "region:us" ]
null
2025-06-03T15:54:12Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE ---
0xarchit/ai_teaching_assistant
0xarchit
2025-06-03T18:29:54Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T18:00:29Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** 0xarchit - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
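The card above stops at the Unsloth banner without a usage example. A minimal inference sketch with the transformers chat pipeline, mirroring the pattern used by other cards in this dump, could look like this; it assumes the uploaded safetensors load as a standard Llama 3.2 instruct model.

```python
# Minimal sketch (assumption: the safetensors weights load as a standard Llama 3.2 chat model).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="0xarchit/ai_teaching_assistant",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain recursion to a first-year student with one short example."}]
output = generator(messages, max_new_tokens=200, return_full_text=False)[0]
print(output["generated_text"])
```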
mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF
mradermacher
2025-06-03T18:29:18Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Open-Reasoner-Zero/ORZ-R1-Distill-Qwen-14B", "base_model:quantized:Open-Reasoner-Zero/ORZ-R1-Distill-Qwen-14B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T17:06:27Z
--- base_model: Open-Reasoner-Zero/ORZ-R1-Distill-Qwen-14B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Open-Reasoner-Zero/ORZ-R1-Distill-Qwen-14B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF/resolve/main/ORZ-R1-Distill-Qwen-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
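The table above lists the available quants, but the usage notes only point to external READMEs. One way to try a single-file quant from Python is sketched below with llama-cpp-python, which is not mentioned in the card and is an assumption of this example; multi-part quants would still need to be concatenated first as the linked READMEs describe.

```python
# Minimal sketch using llama-cpp-python (an assumption, not part of the card) to run one of
# the single-file quants listed in the table above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/ORZ-R1-Distill-Qwen-14B-GGUF",
    filename="ORZ-R1-Distill-Qwen-14B.Q4_K_M.gguf",  # "fast, recommended" row in the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a GGUF quantization level means."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```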
9T1n/Gemma-3-1b-finetuned
9T1n
2025-06-03T18:27:28Z
0
1
null
[ "safetensors", "license:mit", "region:us" ]
null
2025-06-03T17:48:24Z
--- license: mit --- # Gemma-3-1b-finetuned > This is a version of Gemma 3 1B fine-tuned for interacting with and supporting the care of elderly patients. ## Overview - **Model name**: `9T1n/Gemma-3-1b-finetuned` - **Base architecture**: Gemma 3 1B (LLaMA-like foundation) - **Task**: interaction and support for elderly patient care - **Language**: Vietnamese - **Purpose**: - Fine-tune Gemma-3-1B to produce a chat model that responds naturally and understands the conversational context of elderly patients. - The model was fine-tuned on a training dataset created by the student team. ## Repository Structure Gemma-3-1b-finetuned/ ├── added_tokens.json ├── config.json ├── generation_config.json ├── model.safetensors ├── special_tokens_map.json ├── tokenizer.json ├── tokenizer.model ├── tokenizer_config.json └── README.md ## License MIT License Copyright (c) 2025 Hoàng Ngọc Quang
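The card above describes the intended chat behavior but gives no inference snippet. A hedged sketch with the transformers chat pipeline follows, assuming the repo's safetensors load as a standard Gemma 3 1B chat model; the Vietnamese prompt is only an illustration.

```python
# Minimal sketch (assumption: the checkpoint loads as a standard Gemma 3 1B text chat model).
from transformers import pipeline

generator = pipeline("text-generation", model="9T1n/Gemma-3-1b-finetuned", device_map="auto")

# Illustrative Vietnamese prompt: "Hello, today I feel tired and don't want to eat."
messages = [{"role": "user", "content": "Chào cháu, hôm nay bác thấy mệt và không muốn ăn."}]
output = generator(messages, max_new_tokens=150, return_full_text=False)[0]
print(output["generated_text"])
```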
amene-gafsi/MNLP_M2_document_encoder2
amene-gafsi
2025-06-03T18:27:15Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-03T18:26:28Z
--- language: en license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers pipeline_tag: sentence-similarity --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. 
We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
mradermacher/Gemma-3-Fornax-V3-27B-GGUF
mradermacher
2025-06-03T18:27:14Z
0
1
transformers
[ "transformers", "gguf", "gemma3", "gemma", "google", "en", "base_model:ConicCat/Gemma-3-Fornax-V3-27B", "base_model:quantized:ConicCat/Gemma-3-Fornax-V3-27B", "license:gemma", "endpoints_compatible", "region:us" ]
null
2025-06-03T17:19:40Z
--- base_model: ConicCat/Gemma-3-Fornax-V3-27B language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - gemma3 - gemma - google --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ConicCat/Gemma-3-Fornax-V3-27B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-3-Fornax-V3-27B-GGUF/resolve/main/Gemma-3-Fornax-V3-27B.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
embodied-navigation/qwen2.5_vl_3B_top_down_panoimage_pred_sft
embodied-navigation
2025-06-03T18:25:08Z
5
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-02T22:00:33Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct library_name: transformers model_name: qwen2.5_vl_3B_top_down_panoimage_pred_sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5_vl_3B_top_down_panoimage_pred_sft This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="embodied-navigation/qwen2.5_vl_3B_top_down_panoimage_pred_sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/SFU_polito_visiting/VLA_reasoning_navigation/runs/l5wnyqwl) This model was trained with SFT. ### Framework versions - TRL: 0.16.0 - Transformers: 4.49.0 - Pytorch: 2.7.0+cu128 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Boevers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_polished_caterpillar
Boevers
2025-06-03T18:22:33Z
21
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bellowing polished caterpillar", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-08T23:38:02Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_polished_caterpillar tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am bellowing polished caterpillar - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_polished_caterpillar This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Boevers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_polished_caterpillar", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nickeubank/leaa_grant_subjects_2digits_invweighted
nickeubank
2025-06-03T18:20:20Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-03T17:50:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mandell/dqn-SpaceInvadersNoFrameskip-v4
mandell
2025-06-03T18:20:02Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T18:19:09Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 756.00 +/- 79.74 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mandell -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mandell -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mandell ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Ak128umar/distilbert-base-uncased-finetuned-imdb
Ak128umar
2025-06-03T18:16:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-03T18:01:15Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6916 | 1.0 | 157 | 2.5012 | | 2.5716 | 2.0 | 314 | 2.4708 | | 2.5272 | 3.0 | 471 | 2.4558 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
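The card above documents the training run but gives no usage snippet, so the following is a minimal, hedged sketch of how a fill-mask checkpoint like this one can be queried; the repo id is taken from the card title and the example sentence is arbitrary.

```python
# Sketch only: query the fine-tuned masked-language model with the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Ak128umar/distilbert-base-uncased-finetuned-imdb")
for prediction in unmasker("This movie was absolutely [MASK]."):
    # Each prediction is a dict with the candidate token and its probability.
    print(prediction["token_str"], round(prediction["score"], 3))
```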
MinaMila/llama_8b_unlearned_unbalanced_neutral_2nd_1e-6_0.0_0.0_0.0_1.0_epoch1
MinaMila
2025-06-03T18:15:18Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T18:12:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6-2-epochs
kowndinya23
2025-06-03T18:13:02Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6", "base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T17:17:32Z
--- base_model: kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6 datasets: trl-lib/ultrafeedback_binarized library_name: transformers model_name: ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6-2-epochs tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6-2-epochs This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0.6-2-epochs", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/zjwchzuy) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
FormlessAI/6fbe1cf0-8feb-44d1-b103-5681ee13f2ec
FormlessAI
2025-06-03T18:10:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:unsloth/SmolLM-135M", "base_model:finetune:unsloth/SmolLM-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T12:51:03Z
--- base_model: unsloth/SmolLM-135M library_name: transformers model_name: 6fbe1cf0-8feb-44d1-b103-5681ee13f2ec tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for 6fbe1cf0-8feb-44d1-b103-5681ee13f2ec This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/6fbe1cf0-8feb-44d1-b103-5681ee13f2ec", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/p0zhbu8b) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sapna-kumari-shah-hd-16k/Sapna.Shah.kumari.Viral.Video.Link.like
sapna-kumari-shah-hd-16k
2025-06-03T18:09:54Z
0
0
null
[ "region:us" ]
null
2025-06-03T18:09:46Z
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a> <a href="https://sdu.sk/uLf" rel="nofollow">โ–บโœ… ๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐—ฆ๐—ถ๐—ด๐—ป ๐—จ๐—ฝ ๐˜๐—ผ ๐™๐™ช๐™ก๐™ก ๐—ช๐—ฎ๐˜๐—ฐ๐—ต ๐™‘๐™ž๐™™๐™š๐™คโค๏ธโค๏ธ)</a> <a href="https://sdu.sk/uLf" rel="nofollow">๐Ÿ”ด โžคโ–บโœ…๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐ฅ๐ข๐ง๐ค)</a>
ngocnamk3er/dsi_transformers_vault_t5_large_3_6
ngocnamk3er
2025-06-03T18:03:56Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-03T12:49:11Z
--- library_name: transformers license: apache-2.0 base_model: t5-large tags: - generated_from_trainer model-index: - name: dsi_transformers_vault_t5_large_3_6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dsi_transformers_vault_t5_large_3_6 This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.6.0 - Tokenizers 0.20.3
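Since the card does not document the expected input/output format for this checkpoint, the snippet below is only a heavily hedged sketch showing that it loads as a standard seq2seq model; the query string is a placeholder, not the format the model was trained on.

```python
# Sketch only: load the fine-tuned T5 checkpoint as a generic text2text model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ngocnamk3er/dsi_transformers_vault_t5_large_3_6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example query text", return_tensors="pt")  # placeholder input
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```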
xrsula/MNLP_M3_mcqa_model_2
xrsula
2025-06-03T18:02:14Z
0
0
peft
[ "peft", "safetensors", "qwen3", "unsloth", "generated_from_trainer", "base_model:unsloth/Qwen3-0.6B-Base", "base_model:adapter:unsloth/Qwen3-0.6B-Base", "license:apache-2.0", "region:us" ]
null
2025-06-03T15:48:46Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen3-0.6B-Base tags: - unsloth - generated_from_trainer model-index: - name: MNLP_M3_mcqa_model_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MNLP_M3_mcqa_model_2 This model is a fine-tuned version of [unsloth/Qwen3-0.6B-Base](https://huggingface.co/unsloth/Qwen3-0.6B-Base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.0
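Because this repository is a PEFT adapter rather than a full model, the following is a minimal loading sketch (not from the original card); it assumes the adapter targets a causal-LM head on the base model named in the metadata above, and the MCQA-style prompt is only illustrative.

```python
# Sketch only: attach the LoRA/PEFT adapter to its base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B-Base")
model = PeftModel.from_pretrained(base, "xrsula/MNLP_M3_mcqa_model_2")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B-Base")

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Mercury\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```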
drudrull/ppo-LunarLander-v2
drudrull
2025-06-03T18:01:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T18:01:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.53 +/- 25.43 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
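A minimal completion of the "TODO: Add your code" stub above (not from the original card): it assumes the checkpoint follows the usual huggingface_sb3 naming convention, `ppo-LunarLander-v2.zip`, which is not stated in the card.

```python
# Sketch only: load the PPO checkpoint from the Hub and roll it out in the environment.
# Requires: pip install stable-baselines3 huggingface_sb3 "gymnasium[box2d]"
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="drudrull/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename, per the standard SB3 naming scheme
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```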
DorianVH/q-FrozenLake-v1-4x4-noSlippery
DorianVH
2025-06-03T18:00:29Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T18:00:26Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.69 +/- 0.46 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="DorianVH/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
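The `load_from_hub` helper used in the snippet above is normally defined inside the Deep RL course notebook rather than imported from a library, so the following is a hedged reimplementation of what it is assumed to do (download a pickled dictionary containing the Q-table and environment id).

```python
# Sketch only: a possible load_from_hub helper for the pickled Q-learning agent.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled agent dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="DorianVH/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # noSlippery variant, per the repo name
```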
gradientrouting-spar/task2_2d_random_common_words_positive_emotions_v2_5plants_proxy_20250603_174751
gradientrouting-spar
2025-06-03T17:59:53Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T17:57:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jobz-hunting-sajal-malik-18-vodeo/wATCH.jobz.hunting.sajal.malik.viral.video
jobz-hunting-sajal-malik-18-vodeo
2025-06-03T17:59:08Z
0
0
null
[ "region:us" ]
null
2025-06-03T17:59:00Z
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a> <a href="https://sdu.sk/uLf" rel="nofollow">โ–บโœ… ๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐—ฆ๐—ถ๐—ด๐—ป ๐—จ๐—ฝ ๐˜๐—ผ ๐™๐™ช๐™ก๐™ก ๐—ช๐—ฎ๐˜๐—ฐ๐—ต ๐™‘๐™ž๐™™๐™š๐™คโค๏ธโค๏ธ)</a> <a href="https://sdu.sk/uLf" rel="nofollow">๐Ÿ”ด โžคโ–บโœ…๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐ฅ๐ข๐ง๐ค)</a>
Kijai/WanVideo_comfy
Kijai
2025-06-03T17:57:45Z
0
642
null
[ "region:us" ]
null
2025-02-25T17:54:17Z
Combined and quantized models for WanVideo, originating from here: https://huggingface.co/Wan-AI/ Can be used with: https://github.com/kijai/ComfyUI-WanVideoWrapper and the ComfyUI native WanVideo nodes. Other model sources: TinyVAE from https://github.com/madebyollin/taehv SkyReels: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9 WanVideoFun: https://huggingface.co/collections/alibaba-pai/wan21-fun-v11-680f514c89fe7b4df9d44f17 CausVid 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid CausVid 1.3B: https://huggingface.co/tianweiy/CausVid AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B Phantom: https://huggingface.co/bytedance-research/Phantom ATI: https://huggingface.co/bytedance-research/ATI --- The CausVid LoRAs are experimental extractions from the CausVid finetunes; the aim with them is to benefit from CausVid's distillation rather than from any actual causal inference. --- v1 = direct extraction; has adverse effects on motion and introduces a flashing artifact at full strength. v1.5 = same as above, but without the first block, which fixes the flashing at full strength. v2 = further pruned version with only attention layers and no first block; fixes the flashing and retains motion better, but needs more steps and can also benefit from CFG.
andito/nanoVLM
andito
2025-06-03T17:55:25Z
141
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-26T19:56:06Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("andito/nanoVLM") ```
vidyc/tulu_dpo
vidyc
2025-06-03T17:55:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T10:12:43Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
StanfordAIMI/SRR-T5-SciFive
StanfordAIMI
2025-06-03T17:55:15Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:2506.00200", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-01-24T03:42:15Z
--- library_name: transformers tags: [] --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <h1> Structuring Radiology Reports: Challenging LLMs with Lightweight Models </h1> </div> <p align="center"> ๐Ÿ“ <a href="https://arxiv.org/abs/2506.00200" target="_blank">Paper</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/collections/StanfordAIMI/structuring-with-lightweight-models-683e9eb895d42e04112fad88" target="_blank">Hugging Face</a> โ€ข ๐Ÿงฉ <a href="https://github.com/jomoll/rad-report-structuring" target="_blank">Github</a> โ€ข ๐Ÿช„ <a href="https://stanford-aimi.github.io/structuring.html" target="_blank">Project</a> </p> <div align="center"> </div> ```python import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer # step 1: Setup model_name = "StanfordAIMI/SRR-T5-SciFive" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # step 2: Load Processor and Model model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, padding_side="right", use_fast=False) model.eval() # step 3: Inference (example from MIMIC-CXR dataset) input_text = "CHEST RADIOGRAPH PERFORMED ON ___ COMPARISON: Prior exam from ___. CLINICAL HISTORY: Weakness, assess pneumonia. FINDINGS: Frontal and lateral views of the chest were provided. Midline sternotomy wires are again noted. The heart is poorly assessed, though remains enlarged. There are at least small bilateral pleural effusions. There may be mild interstitial edema. No pneumothorax. Bony structures are demineralized with kyphotic angulation in the lower T-spine again noted. IMPRESSION: Limited exam with small bilateral effusions, cardiomegaly, and possible mild interstitial edema." inputs = tokenizer(input_text, padding="max_length", truncation=True, max_length=512, return_tensors="pt") inputs["attention_mask"] = inputs["input_ids"].ne(tokenizer.pad_token_id) # Add attention mask input_ids = inputs['input_ids'].to(device) attention_mask=inputs["attention_mask"].to(device) generated_ids = model.generate( input_ids, attention_mask=attention_mask, max_new_tokens=286, min_new_tokens= 120,decoder_start_token_id=model.config.decoder_start_token_id, num_beams=5, early_stopping=True, max_length=None )[0] decoded = tokenizer.decode(generated_ids, skip_special_tokens=True) print(decoded) ``` ## โœ๏ธ Citation ``` @article{structuring-2025, title={Structuring Radiology Reports: Challenging LLMs with Lightweight Models}, author={Moll, Johannes and Fay, Louisa and Azhar, Asfandyar and Ostmeier, Sophie and Lueth, Tim and Gatidis, Sergios and Langlotz, Curtis and Delbrouck, Jean-Benoit}, journal={arXiv preprint arXiv:2506.00200}, url={https://arxiv.org/abs/2506.00200}, year={2025} } ```
davgauch/MNLP_M3_mcqa_structured_no_rat
davgauch
2025-06-03T17:54:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T16:26:51Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-0.6B-Base tags: - generated_from_trainer model-index: - name: MNLP_M3_mcqa_structured_no_rat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MNLP_M3_mcqa_structured_no_rat This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0937 | 1.0 | 703 | 0.0870 | | 0.0375 | 2.0 | 1406 | 0.1124 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
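The card above reports training details only, so here is a minimal loading sketch (not from the original card); the MCQA-style prompt is an illustrative guess, since the expected input format is not documented.

```python
# Sketch only: load the fine-tuned Qwen3 checkpoint as a causal LM and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davgauch/MNLP_M3_mcqa_structured_no_rat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Question: What is the time complexity of binary search?\n"
    "A. O(n)\nB. O(log n)\nC. O(n log n)\nD. O(1)\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```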
SuperRomanchik/tuchniy_zhab_RVC
SuperRomanchik
2025-06-03T17:54:36Z
0
0
null
[ "ru", "license:mit", "region:us" ]
null
2025-06-03T17:44:48Z
--- license: mit language: - ru --- This is a voice model of the narrator from the Тучный жаб (Tuchniy Zhab) YouTube channel. It was trained on a one-hour video for 200 epochs.
StanfordAIMI/SRR-BERT2BERT-RoBERTa-base
StanfordAIMI
2025-06-03T17:54:30Z
0
0
transformers
[ "transformers", "safetensors", "encoder-decoder", "text2text-generation", "arxiv:2506.00200", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-01-20T23:20:30Z
--- library_name: transformers tags: [] --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <h1> Structuring Radiology Reports: Challenging LLMs with Lightweight Models </h1> </div> <p align="center"> ๐Ÿ“ <a href="https://arxiv.org/abs/2506.00200" target="_blank">Paper</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/collections/StanfordAIMI/structuring-with-lightweight-models-683e9eb895d42e04112fad88" target="_blank">Hugging Face</a> โ€ข ๐Ÿงฉ <a href="https://github.com/jomoll/rad-report-structuring" target="_blank">Github</a> โ€ข ๐Ÿช„ <a href="https://stanford-aimi.github.io/structuring.html" target="_blank">Project</a> </p> <div align="center"> </div> ## ๐ŸŽฌ Get Started ```python import torch from transformers import EncoderDecoderModel, AutoTokenizer # step 1: Setup model_name = "StanfordAIMI/SRR-BERT2BERT-RoBERTa-base" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # step 2: Load Processor and Model model = EncoderDecoderModel.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, padding_side="right", use_fast=False) model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.bos_token_id = tokenizer.cls_token_id model.eval() # step 3: Inference (example from MIMIC-CXR dataset) input_text = "CHEST RADIOGRAPH PERFORMED ON ___ COMPARISON: Prior exam from ___. CLINICAL HISTORY: Weakness, assess pneumonia. FINDINGS: Frontal and lateral views of the chest were provided. Midline sternotomy wires are again noted. The heart is poorly assessed, though remains enlarged. There are at least small bilateral pleural effusions. There may be mild interstitial edema. No pneumothorax. Bony structures are demineralized with kyphotic angulation in the lower T-spine again noted. IMPRESSION: Limited exam with small bilateral effusions, cardiomegaly, and possible mild interstitial edema." inputs = tokenizer(input_text, padding="max_length", truncation=True, max_length=512, return_tensors="pt") inputs["attention_mask"] = inputs["input_ids"].ne(tokenizer.pad_token_id) # Add attention mask input_ids = inputs['input_ids'].to(device) attention_mask=inputs["attention_mask"].to(device) generated_ids = model.generate( input_ids, attention_mask=attention_mask, max_new_tokens=286, min_new_tokens= 120,decoder_start_token_id=model.config.decoder_start_token_id, num_beams=5, early_stopping=True, max_length=None )[0] decoded = tokenizer.decode(generated_ids, skip_special_tokens=True) print(decoded) ``` ## โœ๏ธ Citation ``` @article{structuring-2025, title={Structuring Radiology Reports: Challenging LLMs with Lightweight Models}, author={Moll, Johannes and Fay, Louisa and Azhar, Asfandyar and Ostmeier, Sophie and Lueth, Tim and Gatidis, Sergios and Langlotz, Curtis and Delbrouck, Jean-Benoit}, journal={arXiv preprint arXiv:2506.00200}, url={https://arxiv.org/abs/2506.00200}, year={2025} } ```
Monkey455/llama3-customerbot
Monkey455
2025-06-03T17:54:11Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2025-06-03T17:21:09Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
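The "How to Get Started with the Model" section of the card above is still a placeholder. Below is a minimal, non-authoritative sketch of the usual way a PEFT LoRA adapter for `meta-llama/Meta-Llama-3-8B-Instruct` is loaded; `your-username/your-adapter-repo` is a placeholder for this repository's id, which the card does not state.

```python
# Hedged sketch: standard PEFT adapter loading; the adapter repo id is a placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "your-username/your-adapter-repo"  # placeholder, not stated in the card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```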
passesjamie/lucy_LoRA
passesjamie
2025-06-03T17:53:30Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-03T17:34:07Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK human
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - passesjamie/lucy_LoRA

<Gallery />

## Model description

These are passesjamie/lucy_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of TOK human` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/passesjamie/lucy_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
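The "How to use" block above is still a TODO. Here is a minimal, non-authoritative sketch of how a DreamBooth SDXL LoRA like this one is typically loaded with `diffusers`, assuming the weights in this repo follow the standard pipeline LoRA layout:

```python
# Hedged sketch (not from the card): load SDXL base and apply this LoRA.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Apply the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("passesjamie/lucy_LoRA")

# The instance prompt "a photo of TOK human" acts as the trigger phrase.
image = pipe("a photo of TOK human, studio lighting", num_inference_steps=30).images[0]
image.save("lucy_lora_sample.png")
```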
Stonewu777/q-Taxi-v3
Stonewu777
2025-06-03T17:53:29Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T17:53:26Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

    model = load_from_hub(repo_id="Stonewu777/q-Taxi-v3", filename="q-learning.pkl")

    # Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
    env = gym.make(model["env_id"])
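The usage snippet above stops at creating the environment. A fuller sketch follows, assuming the pickle uses the Hugging Face Deep RL course layout (a dict with `env_id` and `qtable` keys); since this is a custom implementation, those key names are an assumption.

```python
# Hedged sketch: download the pickled Q-table, rebuild the env, and run one greedy episode.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled model dict from the Hub (assumed layout)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Stonewu777/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout with the learned Q-table (key name "qtable" is assumed).
state, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```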
artificiallover0/hairy_man
artificiallover0
2025-06-03T17:50:10Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-06-03T17:41:25Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/20250529_1658_image.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---

# ooiooi

<Gallery />

## Model description

![20250529_1649_image.png](https://cdn-uploads.huggingface.co/production/uploads/680e27c88090729cdb6e9478/J7VA_-I194xKWLk9TfbX3.png)

## Download model

Weights for this model are available in Safetensors format.

[Download](/artificiallover0/hairy_man/tree/main) them in the Files & versions tab.
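The card gives no usage snippet. Below is a minimal, non-authoritative sketch of loading these LoRA weights on top of `black-forest-labs/FLUX.1-dev` with `diffusers`, assuming the Safetensors file follows the standard pipeline LoRA layout; the prompt is an arbitrary placeholder, since no instance prompt is declared.

```python
# Hedged sketch (assumes a standard diffusers-style LoRA file in this repo).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Apply the LoRA weights from this repository on top of the base FLUX.1-dev model.
pipe.load_lora_weights("artificiallover0/hairy_man")

image = pipe(
    "a portrait photo of a man",  # placeholder prompt; the card declares no trigger phrase
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sample.png")
```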
jusjinuk/gemma-3-27b-it-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-03T17:46:24Z
2
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:mit", "region:us" ]
null
2025-06-02T02:22:36Z
---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: mit
---

# Model Card

- Base model: `google/gemma-3-27b-it`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1

# How to run

- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.

# References

- [Model Paper](https://arxiv.org/abs/2505.07004)
OrcaDB/e5-large
OrcaDB
2025-06-03T17:43:17Z
36,146
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "mteb", "transformers", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2401.00368", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-04-25T19:25:13Z
--- tags: - mteb - sentence-transformers - transformers model-index: - name: multilingual-e5-large-instruct results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.23880597014924 - type: ap value: 39.07351965022687 - type: f1 value: 70.04836733862683 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (de) config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.71306209850107 - type: ap value: 79.01499914759529 - type: f1 value: 64.81951817560703 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en-ext) config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.85307346326837 - type: ap value: 22.447519885878737 - type: f1 value: 61.0162730745633 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (ja) config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.04925053533191 - type: ap value: 23.44983217128922 - type: f1 value: 62.5723230907759 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.28742500000001 - type: ap value: 94.8449918887462 - type: f1 value: 96.28680923610432 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 56.716 - type: f1 value: 55.76510398266401 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (de) config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 52.99999999999999 - type: f1 value: 52.00829994765178 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (es) config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.806000000000004 - type: f1 value: 48.082345914983634 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.507999999999996 - type: f1 value: 47.68752844642045 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (ja) config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.709999999999994 - type: f1 value: 47.05870376637181 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.662000000000006 - type: f1 value: 43.42371965372771 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 31.721 - type: map_at_10 value: 49.221 - type: map_at_100 
value: 49.884 - type: map_at_1000 value: 49.888 - type: map_at_3 value: 44.31 - type: map_at_5 value: 47.276 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 49.5 - type: mrr_at_100 value: 50.163000000000004 - type: mrr_at_1000 value: 50.166 - type: mrr_at_3 value: 44.618 - type: mrr_at_5 value: 47.541 - type: ndcg_at_1 value: 31.721 - type: ndcg_at_10 value: 58.384 - type: ndcg_at_100 value: 61.111000000000004 - type: ndcg_at_1000 value: 61.187999999999995 - type: ndcg_at_3 value: 48.386 - type: ndcg_at_5 value: 53.708999999999996 - type: precision_at_1 value: 31.721 - type: precision_at_10 value: 8.741 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.609 - type: recall_at_1 value: 31.721 - type: recall_at_10 value: 87.411 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.044 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.40419580759799 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.48593255007969 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.889179122289995 - type: mrr value: 77.61146286769556 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.15075203727929 - type: cos_sim_spearman value: 86.9622224570873 - type: euclidean_pearson value: 86.70473853624121 - type: euclidean_spearman value: 86.9622224570873 - type: manhattan_pearson value: 86.21089380980065 - type: manhattan_spearman value: 86.75318154937008 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (de-en) config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.65553235908142 - type: f1 value: 99.60681976339595 - type: precision value: 99.58246346555325 - type: recall value: 99.65553235908142 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (fr-en) config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26260180497468 - type: f1 value: 99.14520507740848 - type: precision value: 99.08650671362535 - type: recall value: 99.26260180497468 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (ru-en) config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.07412538967787 - type: f1 value: 97.86629719431936 - type: precision value: 97.76238309664012 - type: recall value: 98.07412538967787 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (zh-en) config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.42074776197998 - type: f1 value: 99.38564156573635 - type: precision value: 99.36808846761454 - type: recall value: 99.42074776197998 - task: type: Classification dataset: 
type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.73376623376623 - type: f1 value: 85.68480707214599 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.935218072113855 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.276389017675264 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.764166666666668 - type: map_at_10 value: 37.298166666666674 - type: map_at_100 value: 38.530166666666666 - type: map_at_1000 value: 38.64416666666667 - type: map_at_3 value: 34.484833333333334 - type: map_at_5 value: 36.0385 - type: mrr_at_1 value: 32.93558333333333 - type: mrr_at_10 value: 41.589749999999995 - type: mrr_at_100 value: 42.425333333333334 - type: mrr_at_1000 value: 42.476333333333336 - type: mrr_at_3 value: 39.26825 - type: mrr_at_5 value: 40.567083333333336 - type: ndcg_at_1 value: 32.93558333333333 - type: ndcg_at_10 value: 42.706583333333334 - type: ndcg_at_100 value: 47.82483333333333 - type: ndcg_at_1000 value: 49.95733333333334 - type: ndcg_at_3 value: 38.064750000000004 - type: ndcg_at_5 value: 40.18158333333333 - type: precision_at_1 value: 32.93558333333333 - type: precision_at_10 value: 7.459833333333334 - type: precision_at_100 value: 1.1830833333333335 - type: precision_at_1000 value: 0.15608333333333332 - type: precision_at_3 value: 17.5235 - type: precision_at_5 value: 12.349833333333333 - type: recall_at_1 value: 27.764166666666668 - type: recall_at_10 value: 54.31775 - type: recall_at_100 value: 76.74350000000001 - type: recall_at_1000 value: 91.45208333333332 - type: recall_at_3 value: 41.23425 - type: recall_at_5 value: 46.73983333333334 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 12.969 - type: map_at_10 value: 21.584999999999997 - type: map_at_100 value: 23.3 - type: map_at_1000 value: 23.5 - type: map_at_3 value: 18.218999999999998 - type: map_at_5 value: 19.983 - type: mrr_at_1 value: 29.316 - type: mrr_at_10 value: 40.033 - type: mrr_at_100 value: 40.96 - type: mrr_at_1000 value: 41.001 - type: mrr_at_3 value: 37.123 - type: mrr_at_5 value: 38.757999999999996 - type: ndcg_at_1 value: 29.316 - type: ndcg_at_10 value: 29.858 - type: ndcg_at_100 value: 36.756 - type: ndcg_at_1000 value: 40.245999999999995 - type: ndcg_at_3 value: 24.822 - type: ndcg_at_5 value: 26.565 - type: precision_at_1 value: 29.316 - type: precision_at_10 value: 9.186 - type: precision_at_100 value: 1.6549999999999998 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 18.436 - type: precision_at_5 value: 13.876 - type: recall_at_1 value: 12.969 - type: recall_at_10 value: 35.142 - type: recall_at_100 value: 59.143 - type: recall_at_1000 value: 78.594 - type: recall_at_3 value: 22.604 - type: recall_at_5 value: 27.883000000000003 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.527999999999999 - type: 
map_at_10 value: 17.974999999999998 - type: map_at_100 value: 25.665 - type: map_at_1000 value: 27.406000000000002 - type: map_at_3 value: 13.017999999999999 - type: map_at_5 value: 15.137 - type: mrr_at_1 value: 62.5 - type: mrr_at_10 value: 71.891 - type: mrr_at_100 value: 72.294 - type: mrr_at_1000 value: 72.296 - type: mrr_at_3 value: 69.958 - type: mrr_at_5 value: 71.121 - type: ndcg_at_1 value: 50.875 - type: ndcg_at_10 value: 38.36 - type: ndcg_at_100 value: 44.235 - type: ndcg_at_1000 value: 52.154 - type: ndcg_at_3 value: 43.008 - type: ndcg_at_5 value: 40.083999999999996 - type: precision_at_1 value: 62.5 - type: precision_at_10 value: 30.0 - type: precision_at_100 value: 10.038 - type: precision_at_1000 value: 2.0869999999999997 - type: precision_at_3 value: 46.833000000000006 - type: precision_at_5 value: 38.800000000000004 - type: recall_at_1 value: 8.527999999999999 - type: recall_at_10 value: 23.828 - type: recall_at_100 value: 52.322 - type: recall_at_1000 value: 77.143 - type: recall_at_3 value: 14.136000000000001 - type: recall_at_5 value: 17.761 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.51 - type: f1 value: 47.632159862049896 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 60.734 - type: map_at_10 value: 72.442 - type: map_at_100 value: 72.735 - type: map_at_1000 value: 72.75 - type: map_at_3 value: 70.41199999999999 - type: map_at_5 value: 71.80499999999999 - type: mrr_at_1 value: 65.212 - type: mrr_at_10 value: 76.613 - type: mrr_at_100 value: 76.79899999999999 - type: mrr_at_1000 value: 76.801 - type: mrr_at_3 value: 74.8 - type: mrr_at_5 value: 76.12400000000001 - type: ndcg_at_1 value: 65.212 - type: ndcg_at_10 value: 77.988 - type: ndcg_at_100 value: 79.167 - type: ndcg_at_1000 value: 79.452 - type: ndcg_at_3 value: 74.362 - type: ndcg_at_5 value: 76.666 - type: precision_at_1 value: 65.212 - type: precision_at_10 value: 10.003 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 29.518 - type: precision_at_5 value: 19.016 - type: recall_at_1 value: 60.734 - type: recall_at_10 value: 90.824 - type: recall_at_100 value: 95.71600000000001 - type: recall_at_1000 value: 97.577 - type: recall_at_3 value: 81.243 - type: recall_at_5 value: 86.90299999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 23.845 - type: map_at_10 value: 39.281 - type: map_at_100 value: 41.422 - type: map_at_1000 value: 41.593 - type: map_at_3 value: 34.467 - type: map_at_5 value: 37.017 - type: mrr_at_1 value: 47.531 - type: mrr_at_10 value: 56.204 - type: mrr_at_100 value: 56.928999999999995 - type: mrr_at_1000 value: 56.962999999999994 - type: mrr_at_3 value: 54.115 - type: mrr_at_5 value: 55.373000000000005 - type: ndcg_at_1 value: 47.531 - type: ndcg_at_10 value: 47.711999999999996 - type: ndcg_at_100 value: 54.510999999999996 - type: ndcg_at_1000 value: 57.103 - type: ndcg_at_3 value: 44.145 - type: ndcg_at_5 value: 45.032 - type: precision_at_1 value: 47.531 - type: precision_at_10 value: 13.194 - type: precision_at_100 value: 2.045 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 29.424 - type: precision_at_5 value: 21.451 - type: recall_at_1 value: 23.845 - 
type: recall_at_10 value: 54.967 - type: recall_at_100 value: 79.11399999999999 - type: recall_at_1000 value: 94.56700000000001 - type: recall_at_3 value: 40.256 - type: recall_at_5 value: 46.215 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 37.819 - type: map_at_10 value: 60.889 - type: map_at_100 value: 61.717999999999996 - type: map_at_1000 value: 61.778 - type: map_at_3 value: 57.254000000000005 - type: map_at_5 value: 59.541 - type: mrr_at_1 value: 75.638 - type: mrr_at_10 value: 82.173 - type: mrr_at_100 value: 82.362 - type: mrr_at_1000 value: 82.37 - type: mrr_at_3 value: 81.089 - type: mrr_at_5 value: 81.827 - type: ndcg_at_1 value: 75.638 - type: ndcg_at_10 value: 69.317 - type: ndcg_at_100 value: 72.221 - type: ndcg_at_1000 value: 73.382 - type: ndcg_at_3 value: 64.14 - type: ndcg_at_5 value: 67.07600000000001 - type: precision_at_1 value: 75.638 - type: precision_at_10 value: 14.704999999999998 - type: precision_at_100 value: 1.698 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 41.394999999999996 - type: precision_at_5 value: 27.162999999999997 - type: recall_at_1 value: 37.819 - type: recall_at_10 value: 73.52499999999999 - type: recall_at_100 value: 84.875 - type: recall_at_1000 value: 92.559 - type: recall_at_3 value: 62.092999999999996 - type: recall_at_5 value: 67.907 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.60079999999999 - type: ap value: 92.67396345347356 - type: f1 value: 94.5988098167121 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.285 - type: map_at_10 value: 33.436 - type: map_at_100 value: 34.63 - type: map_at_1000 value: 34.681 - type: map_at_3 value: 29.412 - type: map_at_5 value: 31.715 - type: mrr_at_1 value: 21.848 - type: mrr_at_10 value: 33.979 - type: mrr_at_100 value: 35.118 - type: mrr_at_1000 value: 35.162 - type: mrr_at_3 value: 30.036 - type: mrr_at_5 value: 32.298 - type: ndcg_at_1 value: 21.862000000000002 - type: ndcg_at_10 value: 40.43 - type: ndcg_at_100 value: 46.17 - type: ndcg_at_1000 value: 47.412 - type: ndcg_at_3 value: 32.221 - type: ndcg_at_5 value: 36.332 - type: precision_at_1 value: 21.862000000000002 - type: precision_at_10 value: 6.491 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.744 - type: precision_at_5 value: 10.331999999999999 - type: recall_at_1 value: 21.285 - type: recall_at_10 value: 62.083 - type: recall_at_100 value: 88.576 - type: recall_at_1000 value: 98.006 - type: recall_at_3 value: 39.729 - type: recall_at_5 value: 49.608000000000004 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.92612859097127 - type: f1 value: 93.82370333372853 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (de) config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.67681036911807 - type: f1 value: 92.14191382411472 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (es) config: es split: test revision: 
d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.26817878585723 - type: f1 value: 91.92824250337878 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.96554963983714 - type: f1 value: 90.02859329630792 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (hi) config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.02509860164935 - type: f1 value: 89.30665159182062 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (th) config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.55515370705244 - type: f1 value: 87.94449232331907 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.4623803009576 - type: f1 value: 66.06738378772725 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (de) config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.3716539870386 - type: f1 value: 60.37614033396853 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (es) config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 80.34022681787857 - type: f1 value: 58.302008026952 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.72095208268087 - type: f1 value: 59.64524724009049 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (hi) config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.87020437432773 - type: f1 value: 57.80202694670567 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (th) config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.73598553345387 - type: f1 value: 58.19628250675031 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (af) config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.6630800268998 - type: f1 value: 65.00996668051691 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (am) config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.7128446536651 - type: f1 value: 57.95860594874963 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ar) config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.61129791526563 - type: f1 value: 59.75328290206483 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (az) config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 
69.00134498991257 - type: f1 value: 67.0230483991802 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (bn) config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.54068594485541 - type: f1 value: 65.54604628946976 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (cy) config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.032952252858095 - type: f1 value: 58.715741857057104 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (da) config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.80901143241427 - type: f1 value: 68.33963989243877 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (de) config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.47141896435777 - type: f1 value: 69.56765020308262 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (el) config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.2373907195696 - type: f1 value: 69.04529836036467 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.05783456624076 - type: f1 value: 74.69430584708174 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (es) config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.82111634162744 - type: f1 value: 70.77228952803762 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fa) config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.25353059852051 - type: f1 value: 71.05310103416411 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fi) config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.28648285137861 - type: f1 value: 69.08020473732226 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.31540013449899 - type: f1 value: 70.9426355465791 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (he) config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.2151983860121 - type: f1 value: 67.52541755908858 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hi) config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.58372562205784 - type: f1 value: 69.49769064229827 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hu) config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 
metrics: - type: accuracy value: 71.9233355749832 - type: f1 value: 69.36311548259593 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hy) config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.07330195023538 - type: f1 value: 64.99882022345572 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (id) config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.62273032952253 - type: f1 value: 70.6394885471001 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (is) config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.77000672494957 - type: f1 value: 62.9368944815065 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.453261600538 - type: f1 value: 70.85069934666681 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ja) config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6906523201076 - type: f1 value: 72.03249740074217 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (jv) config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.03631472763953 - type: f1 value: 59.3165215571852 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ka) config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.913920645595155 - type: f1 value: 57.367337711611285 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (km) config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.42837928715535 - type: f1 value: 52.60527294970906 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (kn) config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.33490248823135 - type: f1 value: 63.213340969404065 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ko) config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.58507061197041 - type: f1 value: 68.40256628040486 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (lv) config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 66.44863577842305 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ml) config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.70073974445192 - type: f1 value: 67.21291337273702 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (mn) config: mn split: test revision: 
31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.43913920645595 - type: f1 value: 64.09838087422806 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ms) config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.80026899798251 - type: f1 value: 68.76986742962444 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (my) config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.78816408876934 - type: f1 value: 62.18781873428972 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nb) config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.6577000672495 - type: f1 value: 68.75171511133003 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nl) config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.42501681237391 - type: f1 value: 71.18434963451544 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pl) config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.64828513786146 - type: f1 value: 70.67741914007422 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pt) config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.62811028917284 - type: f1 value: 71.36402039740959 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ro) config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.88634835238736 - type: f1 value: 69.23701923480677 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ru) config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.15938130464022 - type: f1 value: 71.87792218993388 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sl) config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.96301277740416 - type: f1 value: 67.29584200202983 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sq) config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.49562878278412 - type: f1 value: 66.91716685679431 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sv) config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6805648957633 - type: f1 value: 72.02723592594374 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sw) config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.00605245460659 - type: f1 value: 60.16716669482932 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ta) 
config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.90988567585742 - type: f1 value: 63.99405488777784 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (te) config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.62273032952253 - type: f1 value: 65.17213906909481 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (th) config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.50907868190988 - type: f1 value: 69.15165697194853 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tl) config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.30733019502352 - type: f1 value: 66.69024007380474 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tr) config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24277067921989 - type: f1 value: 68.80515408492947 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ur) config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.49831876260929 - type: f1 value: 64.83778567111116 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (vi) config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.28782784129119 - type: f1 value: 69.3294186700733 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.315400134499 - type: f1 value: 71.22674385243207 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-TW) config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.37794216543377 - type: f1 value: 68.96962492838232 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (af) config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.33557498318764 - type: f1 value: 72.28949738478356 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (am) config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.84398117014123 - type: f1 value: 64.71026362091463 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ar) config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.76462676529925 - type: f1 value: 69.8229667407667 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (az) config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.02420981842636 - type: f1 value: 71.76576384895898 - task: type: Classification dataset: type: 
mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (bn) config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.7572293207801 - type: f1 value: 72.76840765295256 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (cy) config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.02286482851379 - type: f1 value: 66.17237947327872 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (da) config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.60928043039678 - type: f1 value: 77.27094731234773 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (de) config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.68325487558843 - type: f1 value: 77.97530399082261 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (el) config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.13315400134498 - type: f1 value: 75.97558584796424 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.47410894418292 - type: f1 value: 80.52244841473792 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (es) config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.9670477471419 - type: f1 value: 77.37318805793146 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fa) config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.09683927370544 - type: f1 value: 77.69773737430847 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fi) config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.20847343644922 - type: f1 value: 75.17071738727348 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.07464694014796 - type: f1 value: 77.16136207698571 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (he) config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.53396099529255 - type: f1 value: 73.58296404484122 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hi) config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.75319435104237 - type: f1 value: 75.24674707850833 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hu) config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 
77.0948217888366 - type: f1 value: 76.47559490205028 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hy) config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.07599193006052 - type: f1 value: 70.76028043093511 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (id) config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.10490921318089 - type: f1 value: 77.01215275283272 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (is) config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25756556825824 - type: f1 value: 70.20605314648762 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.08137188971082 - type: f1 value: 77.3899269057439 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ja) config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35440484196369 - type: f1 value: 79.58964690002772 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (jv) config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.42299932750504 - type: f1 value: 68.07844356925413 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ka) config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.15669132481507 - type: f1 value: 65.89383352608513 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (km) config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.11432414256894 - type: f1 value: 57.69910594559806 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (kn) config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.24747814391392 - type: f1 value: 70.42455553830918 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ko) config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46267652992603 - type: f1 value: 76.8854559308316 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (lv) config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.24815063887021 - type: f1 value: 72.77805034658074 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ml) config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11566913248151 - type: f1 value: 73.86147988001356 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (mn) config: mn split: test 
revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.0168123739072 - type: f1 value: 69.38515920054571 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ms) config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.41156691324814 - type: f1 value: 73.43474953408237 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (my) config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.39609952925353 - type: f1 value: 67.29731681109291 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nb) config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.20914593140552 - type: f1 value: 77.07066497935367 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nl) config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.52387357094821 - type: f1 value: 78.5259569473291 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pl) config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.6913248150639 - type: f1 value: 76.91201656350455 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pt) config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.1217215870881 - type: f1 value: 77.41179937912504 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ro) config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.25891055817083 - type: f1 value: 75.8089244542887 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ru) config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.70679219905851 - type: f1 value: 78.21459594517711 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sl) config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.83523873570948 - type: f1 value: 74.86847028401978 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sq) config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.71755211835911 - type: f1 value: 74.0214326485662 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sv) config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.06523201075991 - type: f1 value: 79.10545620325138 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sw) config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.91862811028918 - type: f1 value: 66.50386121217983 - task: type: Classification dataset: type: 
mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ta) config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.93140551445865 - type: f1 value: 70.755435928495 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (te) config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.40753194351042 - type: f1 value: 71.61816115782923 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (th) config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.1815736381977 - type: f1 value: 75.08016717887205 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tl) config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.86482851378614 - type: f1 value: 72.39521180006291 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tr) config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46940147948891 - type: f1 value: 76.70044085362349 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ur) config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.89307330195024 - type: f1 value: 71.5721825332298 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (vi) config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.7511768661735 - type: f1 value: 75.17918654541515 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69535978480162 - type: f1 value: 78.90019070153316 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-TW) config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.45729657027572 - type: f1 value: 76.19578371794672 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.92715354123554 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 35.53536244162518 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.08507884504006 - type: mrr value: 34.32436977159129 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.935 - type: map_at_10 value: 13.297 - type: map_at_100 value: 16.907 - type: map_at_1000 value: 18.391 - type: map_at_3 value: 9.626999999999999 - type: map_at_5 value: 11.190999999999999 - type: mrr_at_1 value: 
46.129999999999995 - type: mrr_at_10 value: 54.346000000000004 - type: mrr_at_100 value: 55.067 - type: mrr_at_1000 value: 55.1 - type: mrr_at_3 value: 51.961 - type: mrr_at_5 value: 53.246 - type: ndcg_at_1 value: 44.118 - type: ndcg_at_10 value: 35.534 - type: ndcg_at_100 value: 32.946999999999996 - type: ndcg_at_1000 value: 41.599000000000004 - type: ndcg_at_3 value: 40.25 - type: ndcg_at_5 value: 37.978 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.842 - type: precision_at_100 value: 8.427 - type: precision_at_1000 value: 2.128 - type: precision_at_3 value: 37.977 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.935 - type: recall_at_10 value: 17.211000000000002 - type: recall_at_100 value: 34.33 - type: recall_at_1000 value: 65.551 - type: recall_at_3 value: 10.483 - type: recall_at_5 value: 13.078999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 35.231 - type: map_at_10 value: 50.202000000000005 - type: map_at_100 value: 51.154999999999994 - type: map_at_1000 value: 51.181 - type: map_at_3 value: 45.774 - type: map_at_5 value: 48.522 - type: mrr_at_1 value: 39.687 - type: mrr_at_10 value: 52.88 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.58500000000001 - type: mrr_at_3 value: 49.228 - type: mrr_at_5 value: 51.525 - type: ndcg_at_1 value: 39.687 - type: ndcg_at_10 value: 57.754000000000005 - type: ndcg_at_100 value: 61.597 - type: ndcg_at_1000 value: 62.18900000000001 - type: ndcg_at_3 value: 49.55 - type: ndcg_at_5 value: 54.11899999999999 - type: precision_at_1 value: 39.687 - type: precision_at_10 value: 9.313 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.229 - type: precision_at_5 value: 15.939 - type: recall_at_1 value: 35.231 - type: recall_at_10 value: 78.083 - type: recall_at_100 value: 94.42099999999999 - type: recall_at_1000 value: 98.81 - type: recall_at_3 value: 57.047000000000004 - type: recall_at_5 value: 67.637 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.241 - type: map_at_10 value: 85.462 - type: map_at_100 value: 86.083 - type: map_at_1000 value: 86.09700000000001 - type: map_at_3 value: 82.49499999999999 - type: map_at_5 value: 84.392 - type: mrr_at_1 value: 82.09 - type: mrr_at_10 value: 88.301 - type: mrr_at_100 value: 88.383 - type: mrr_at_1000 value: 88.384 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.035 - type: ndcg_at_1 value: 82.12 - type: ndcg_at_10 value: 89.149 - type: ndcg_at_100 value: 90.235 - type: ndcg_at_1000 value: 90.307 - type: ndcg_at_3 value: 86.37599999999999 - type: ndcg_at_5 value: 87.964 - type: precision_at_1 value: 82.12 - type: precision_at_10 value: 13.56 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.88 - type: precision_at_5 value: 24.92 - type: recall_at_1 value: 71.241 - type: recall_at_10 value: 96.128 - type: recall_at_100 value: 99.696 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.181 - type: recall_at_5 value: 92.694 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.59757799655151 - task: type: Clustering dataset: type: 
mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.27391998854624 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.243 - type: map_at_10 value: 10.965 - type: map_at_100 value: 12.934999999999999 - type: map_at_1000 value: 13.256 - type: map_at_3 value: 7.907 - type: map_at_5 value: 9.435 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.849 - type: mrr_at_100 value: 32.964 - type: mrr_at_1000 value: 33.024 - type: mrr_at_3 value: 28.517 - type: mrr_at_5 value: 30.381999999999998 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 18.723 - type: ndcg_at_100 value: 26.384999999999998 - type: ndcg_at_1000 value: 32.114 - type: ndcg_at_3 value: 17.753 - type: ndcg_at_5 value: 15.558 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 2.078 - type: precision_at_1000 value: 0.345 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 13.88 - type: recall_at_1 value: 4.243 - type: recall_at_10 value: 19.885 - type: recall_at_100 value: 42.17 - type: recall_at_1000 value: 70.12 - type: recall_at_3 value: 10.288 - type: recall_at_5 value: 14.072000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.84209174935282 - type: cos_sim_spearman value: 81.73248048438833 - type: euclidean_pearson value: 83.02810070308149 - type: euclidean_spearman value: 81.73248295679514 - type: manhattan_pearson value: 82.95368060376002 - type: manhattan_spearman value: 81.60277910998718 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.52628804556943 - type: cos_sim_spearman value: 82.5713913555672 - type: euclidean_pearson value: 85.8796774746988 - type: euclidean_spearman value: 82.57137506803424 - type: manhattan_pearson value: 85.79671002960058 - type: manhattan_spearman value: 82.49445981618027 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.23682503505542 - type: cos_sim_spearman value: 87.15008956711806 - type: euclidean_pearson value: 86.79805401524959 - type: euclidean_spearman value: 87.15008956711806 - type: manhattan_pearson value: 86.65298502699244 - type: manhattan_spearman value: 86.97677821948562 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.63370304677802 - type: cos_sim_spearman value: 84.97105553540318 - type: euclidean_pearson value: 85.28896108687721 - type: euclidean_spearman value: 84.97105553540318 - type: manhattan_pearson value: 85.09663190337331 - type: manhattan_spearman value: 84.79126831644619 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.2614838800733 - type: cos_sim_spearman value: 91.0509162991835 - type: euclidean_pearson value: 90.33098317533373 - type: euclidean_spearman value: 91.05091625871644 - 
type: manhattan_pearson value: 90.26250435151107 - type: manhattan_spearman value: 90.97999594417519 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.80480973335091 - type: cos_sim_spearman value: 87.313695492969 - type: euclidean_pearson value: 86.49267251576939 - type: euclidean_spearman value: 87.313695492969 - type: manhattan_pearson value: 86.44019901831935 - type: manhattan_spearman value: 87.24205395460392 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.05662789380672 - type: cos_sim_spearman value: 90.02759424426651 - type: euclidean_pearson value: 90.4042483422981 - type: euclidean_spearman value: 90.02759424426651 - type: manhattan_pearson value: 90.51446975000226 - type: manhattan_spearman value: 90.08832889933616 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.5975528273532 - type: cos_sim_spearman value: 67.62969861411354 - type: euclidean_pearson value: 69.224275734323 - type: euclidean_spearman value: 67.62969861411354 - type: manhattan_pearson value: 69.3761447059927 - type: manhattan_spearman value: 67.90921005611467 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.11244327231684 - type: cos_sim_spearman value: 88.37902438979035 - type: euclidean_pearson value: 87.86054279847336 - type: euclidean_spearman value: 88.37902438979035 - type: manhattan_pearson value: 87.77257757320378 - type: manhattan_spearman value: 88.25208966098123 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.87174608143563 - type: mrr value: 96.12836872640794 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.258 - type: map_at_100 value: 67.757 - type: map_at_1000 value: 67.78800000000001 - type: map_at_3 value: 64.602 - type: map_at_5 value: 65.64 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.441 - type: mrr_at_100 value: 68.825 - type: mrr_at_1000 value: 68.853 - type: mrr_at_3 value: 66.444 - type: mrr_at_5 value: 67.26100000000001 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.852 - type: ndcg_at_100 value: 73.9 - type: ndcg_at_1000 value: 74.628 - type: ndcg_at_3 value: 67.093 - type: ndcg_at_5 value: 68.58 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.111 - type: precision_at_5 value: 16.733 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.967 - type: recall_at_100 value: 93.833 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 71.589 - type: recall_at_5 value: 75.483 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions 
config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66633663366336 - type: cos_sim_ap value: 91.17685358899108 - type: cos_sim_f1 value: 82.16818642350559 - type: cos_sim_precision value: 83.26488706365504 - type: cos_sim_recall value: 81.10000000000001 - type: dot_accuracy value: 99.66633663366336 - type: dot_ap value: 91.17663411119032 - type: dot_f1 value: 82.16818642350559 - type: dot_precision value: 83.26488706365504 - type: dot_recall value: 81.10000000000001 - type: euclidean_accuracy value: 99.66633663366336 - type: euclidean_ap value: 91.17685189882275 - type: euclidean_f1 value: 82.16818642350559 - type: euclidean_precision value: 83.26488706365504 - type: euclidean_recall value: 81.10000000000001 - type: manhattan_accuracy value: 99.66633663366336 - type: manhattan_ap value: 91.2241619496737 - type: manhattan_f1 value: 82.20472440944883 - type: manhattan_precision value: 86.51933701657458 - type: manhattan_recall value: 78.3 - type: max_accuracy value: 99.66633663366336 - type: max_ap value: 91.2241619496737 - type: max_f1 value: 82.20472440944883 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.85101268897951 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 42.461184054706905 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.44542568873886 - type: mrr value: 52.33656151854681 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.75982974997539 - type: cos_sim_spearman value: 30.385405026539914 - type: dot_pearson value: 30.75982433546523 - type: dot_spearman value: 30.385405026539914 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22799999999999998 - type: map_at_10 value: 2.064 - type: map_at_100 value: 13.056000000000001 - type: map_at_1000 value: 31.747999999999998 - type: map_at_3 value: 0.67 - type: map_at_5 value: 1.097 - type: mrr_at_1 value: 90.0 - type: mrr_at_10 value: 94.667 - type: mrr_at_100 value: 94.667 - type: mrr_at_1000 value: 94.667 - type: mrr_at_3 value: 94.667 - type: mrr_at_5 value: 94.667 - type: ndcg_at_1 value: 86.0 - type: ndcg_at_10 value: 82.0 - type: ndcg_at_100 value: 64.307 - type: ndcg_at_1000 value: 57.023999999999994 - type: ndcg_at_3 value: 85.816 - type: ndcg_at_5 value: 84.904 - type: precision_at_1 value: 90.0 - type: precision_at_10 value: 85.8 - type: precision_at_100 value: 66.46 - type: precision_at_1000 value: 25.202 - type: precision_at_3 value: 90.0 - type: precision_at_5 value: 89.2 - type: recall_at_1 value: 0.22799999999999998 - type: recall_at_10 value: 2.235 - type: recall_at_100 value: 16.185 - type: recall_at_1000 value: 53.620999999999995 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.172 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB 
Tatoeba (sqi-eng) config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.75 - type: precision value: 96.45 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fry-eng) config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.54913294797689 - type: f1 value: 82.46628131021194 - type: precision value: 81.1175337186898 - type: recall value: 85.54913294797689 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kur-eng) config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.21951219512195 - type: f1 value: 77.33333333333334 - type: precision value: 75.54878048780488 - type: recall value: 81.21951219512195 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tur-eng) config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.26666666666665 - type: precision value: 98.1 - type: recall value: 98.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (deu-eng) config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.5 - type: f1 value: 99.33333333333333 - type: precision value: 99.25 - type: recall value: 99.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nld-eng) config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.2 - type: precision value: 96.89999999999999 - type: recall value: 97.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ron-eng) config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.18333333333334 - type: precision value: 96.88333333333333 - type: recall value: 97.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ang-eng) config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.61194029850746 - type: f1 value: 72.81094527363183 - type: precision value: 70.83333333333333 - type: recall value: 77.61194029850746 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ido-eng) config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.91666666666667 - type: precision value: 91.08333333333334 - type: recall value: 93.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jav-eng) config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.29268292682927 - type: f1 value: 85.27642276422765 - type: precision value: 84.01277584204414 - type: recall value: 88.29268292682927 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (isl-eng) config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.0 - type: precision value: 94.46666666666668 - type: recall value: 96.1 - task: type: BitextMining dataset: type: 
mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slv-eng) config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.681652490887 - type: f1 value: 91.90765492102065 - type: precision value: 91.05913325232888 - type: recall value: 93.681652490887 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cym-eng) config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.17391304347827 - type: f1 value: 89.97101449275361 - type: precision value: 88.96811594202899 - type: recall value: 92.17391304347827 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kaz-eng) config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.43478260869566 - type: f1 value: 87.72173913043478 - type: precision value: 86.42028985507245 - type: recall value: 90.43478260869566 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (est-eng) config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.03 - type: precision value: 86.95 - type: recall value: 90.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (heb-eng) config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.4 - type: f1 value: 91.45666666666666 - type: precision value: 90.525 - type: recall value: 93.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gla-eng) config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.9059107358263 - type: f1 value: 78.32557872364869 - type: precision value: 76.78260286824823 - type: recall value: 81.9059107358263 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mar-eng) config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.58333333333333 - type: precision value: 91.73333333333332 - type: recall value: 94.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lat-eng) config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.10000000000001 - type: f1 value: 74.50500000000001 - type: precision value: 72.58928571428571 - type: recall value: 79.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bel-eng) config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.55 - type: precision value: 95.05 - type: recall value: 96.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pms-eng) config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.0952380952381 - type: f1 value: 77.98458049886621 - type: precision value: 76.1968253968254 - type: recall value: 82.0952380952381 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gle-eng) config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.9 - type: f1 value: 84.99190476190476 - type: precision value: 83.65 - type: recall 
value: 87.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pes-eng) config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.56666666666666 - type: precision value: 94.01666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nob-eng) config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.2 - type: precision value: 98.0 - type: recall value: 98.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bul-eng) config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.38333333333334 - type: precision value: 93.78333333333335 - type: recall value: 95.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cbk-eng) config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.4 - type: f1 value: 84.10380952380952 - type: precision value: 82.67 - type: recall value: 87.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hun-eng) config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.33333333333334 - type: precision value: 93.78333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uig-eng) config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.4 - type: f1 value: 86.82000000000001 - type: precision value: 85.64500000000001 - type: recall value: 89.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (rus-eng) config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.1 - type: f1 value: 93.56666666666668 - type: precision value: 92.81666666666666 - type: recall value: 95.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (spa-eng) config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.6 - type: precision value: 98.45 - type: recall value: 98.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hye-eng) config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.01347708894879 - type: f1 value: 93.51752021563343 - type: precision value: 92.82794249775381 - type: recall value: 95.01347708894879 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tel-eng) config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.00854700854701 - type: f1 value: 96.08262108262107 - type: precision value: 95.65527065527067 - type: recall value: 97.00854700854701 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (afr-eng) config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.39999999999999 - type: precision value: 94.88333333333333 - type: recall value: 96.5 - task: type: 
BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mon-eng) config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5909090909091 - type: f1 value: 95.49242424242425 - type: precision value: 94.9621212121212 - type: recall value: 96.5909090909091 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arz-eng) config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.90566037735849 - type: f1 value: 81.85883997204752 - type: precision value: 80.54507337526205 - type: recall value: 84.90566037735849 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hrv-eng) config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.5 - type: f1 value: 96.75 - type: precision value: 96.38333333333333 - type: recall value: 97.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nov-eng) config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7704280155642 - type: f1 value: 82.99610894941635 - type: precision value: 81.32295719844358 - type: recall value: 86.7704280155642 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gsw-eng) config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.52136752136752 - type: f1 value: 61.89662189662191 - type: precision value: 59.68660968660969 - type: recall value: 67.52136752136752 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nds-eng) config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.2 - type: f1 value: 86.32 - type: precision value: 85.015 - type: recall value: 89.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ukr-eng) config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.0 - type: f1 value: 94.78333333333333 - type: precision value: 94.18333333333334 - type: recall value: 96.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uzb-eng) config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.8785046728972 - type: f1 value: 80.54517133956385 - type: precision value: 79.154984423676 - type: recall value: 83.8785046728972 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lit-eng) config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.60000000000001 - type: f1 value: 92.01333333333334 - type: precision value: 91.28333333333333 - type: recall value: 93.60000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ina-eng) config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.1 - type: f1 value: 96.26666666666667 - type: precision value: 95.85000000000001 - type: recall value: 97.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lfn-eng) config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 
80.67833333333333 - type: precision value: 79.03928571428571 - type: recall value: 84.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (zsm-eng) config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.3 - type: f1 value: 96.48333333333332 - type: precision value: 96.08333333333331 - type: recall value: 97.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ita-eng) config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.66666666666667 - type: precision value: 94.16666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cmn-eng) config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.36666666666667 - type: precision value: 95.96666666666668 - type: recall value: 97.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lvs-eng) config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.80666666666667 - type: precision value: 92.12833333333333 - type: recall value: 94.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (glg-eng) config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.22333333333334 - type: precision value: 95.875 - type: recall value: 97.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ceb-eng) config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.33333333333333 - type: f1 value: 70.78174603174602 - type: precision value: 69.28333333333332 - type: recall value: 74.33333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bre-eng) config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.6 - type: f1 value: 32.938348952090365 - type: precision value: 31.2811038961039 - type: recall value: 37.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ben-eng) config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.5 - type: f1 value: 89.13333333333333 - type: precision value: 88.03333333333333 - type: recall value: 91.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swg-eng) config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.14285714285714 - type: f1 value: 77.67857142857143 - type: precision value: 75.59523809523809 - type: recall value: 82.14285714285714 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arq-eng) config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.0450054884742 - type: f1 value: 63.070409283362075 - type: precision value: 60.58992781824835 - type: recall value: 69.0450054884742 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kab-eng) config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - 
type: accuracy value: 63.1 - type: f1 value: 57.848333333333336 - type: precision value: 55.69500000000001 - type: recall value: 63.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fra-eng) config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.01666666666667 - type: precision value: 94.5 - type: recall value: 96.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (por-eng) config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.90666666666667 - type: precision value: 94.425 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tat-eng) config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.61333333333333 - type: precision value: 83.27 - type: recall value: 87.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (oci-eng) config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.4 - type: f1 value: 71.90746031746032 - type: precision value: 70.07027777777778 - type: recall value: 76.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pol-eng) config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.26666666666667 - type: precision value: 96.95 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (war-eng) config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.8 - type: f1 value: 74.39555555555555 - type: precision value: 72.59416666666667 - type: recall value: 78.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (aze-eng) config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.78999999999999 - type: precision value: 93.125 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (vie-eng) config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.75 - type: recall value: 97.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nno-eng) config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.25666666666666 - type: precision value: 93.64166666666668 - type: recall value: 95.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cha-eng) config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 56.934306569343065 - type: f1 value: 51.461591936044485 - type: precision value: 49.37434827945776 - type: recall value: 56.934306569343065 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mhr-eng) config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - 
type: accuracy value: 20.200000000000003 - type: f1 value: 16.91799284049284 - type: precision value: 15.791855158730158 - type: recall value: 20.200000000000003 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dan-eng) config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.2 - type: f1 value: 95.3 - type: precision value: 94.85 - type: recall value: 96.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ell-eng) config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.3 - type: f1 value: 95.11666666666667 - type: precision value: 94.53333333333333 - type: recall value: 96.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (amh-eng) config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.88095238095238 - type: f1 value: 87.14285714285714 - type: precision value: 85.96230158730161 - type: recall value: 89.88095238095238 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pam-eng) config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 24.099999999999998 - type: f1 value: 19.630969083349783 - type: precision value: 18.275094905094907 - type: recall value: 24.099999999999998 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hsb-eng) config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.4368530020704 - type: f1 value: 79.45183870649709 - type: precision value: 77.7432712215321 - type: recall value: 83.4368530020704 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (srp-eng) config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.53333333333333 - type: precision value: 93.91666666666666 - type: recall value: 95.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (epo-eng) config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.8 - type: f1 value: 98.48333333333332 - type: precision value: 98.33333333333334 - type: recall value: 98.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kzj-eng) config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.5 - type: f1 value: 14.979285714285714 - type: precision value: 14.23235060690943 - type: recall value: 17.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (awa-eng) config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.93939393939394 - type: f1 value: 91.991341991342 - type: precision value: 91.05339105339105 - type: recall value: 93.93939393939394 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fao-eng) config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.31297709923665 - type: f1 value: 86.76844783715012 - type: precision value: 85.63613231552164 - type: recall value: 89.31297709923665 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: 
MTEB Tatoeba (mal-eng) config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.12663755458514 - type: f1 value: 98.93255701115964 - type: precision value: 98.83551673944687 - type: recall value: 99.12663755458514 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ile-eng) config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.0 - type: f1 value: 89.77999999999999 - type: precision value: 88.78333333333333 - type: recall value: 92.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bos-eng) config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89265536723164 - type: f1 value: 95.85687382297553 - type: precision value: 95.33898305084746 - type: recall value: 96.89265536723164 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cor-eng) config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.6 - type: f1 value: 11.820611790170615 - type: precision value: 11.022616224355355 - type: recall value: 14.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cat-eng) config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.93333333333334 - type: precision value: 94.48666666666666 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (eus-eng) config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.72333333333334 - type: precision value: 83.44166666666666 - type: recall value: 87.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yue-eng) config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.47333333333333 - type: precision value: 92.875 - type: recall value: 94.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swe-eng) config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.71666666666665 - type: precision value: 95.28333333333335 - type: recall value: 96.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dtp-eng) config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.8 - type: f1 value: 14.511074040901628 - type: precision value: 13.503791000666002 - type: recall value: 17.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kat-eng) config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.10187667560321 - type: f1 value: 92.46648793565683 - type: precision value: 91.71134941912423 - type: recall value: 94.10187667560321 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jpn-eng) config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.11666666666666 - type: precision value: 95.68333333333334 - type: recall value: 97.0 - 
task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (csb-eng) config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.72727272727273 - type: f1 value: 66.58949745906267 - type: precision value: 63.86693017127799 - type: recall value: 72.72727272727273 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (xho-eng) config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.14084507042254 - type: f1 value: 88.26291079812206 - type: precision value: 87.32394366197182 - type: recall value: 90.14084507042254 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (orv-eng) config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.67065868263472 - type: f1 value: 58.2876627696987 - type: precision value: 55.79255774165953 - type: recall value: 64.67065868263472 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ind-eng) config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.41666666666667 - type: precision value: 93.85 - type: recall value: 95.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tuk-eng) config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.172413793103445 - type: f1 value: 49.63992493549144 - type: precision value: 47.71405113769646 - type: recall value: 55.172413793103445 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (max-eng) config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.46478873239437 - type: f1 value: 73.4417616811983 - type: precision value: 71.91607981220658 - type: recall value: 77.46478873239437 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swh-eng) config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.61538461538461 - type: f1 value: 80.91452991452994 - type: precision value: 79.33760683760683 - type: recall value: 84.61538461538461 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hin-eng) config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.6 - type: precision value: 97.3 - type: recall value: 98.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dsb-eng) config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.5741127348643 - type: f1 value: 72.00417536534445 - type: precision value: 70.53467872883321 - type: recall value: 75.5741127348643 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ber-eng) config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.2 - type: f1 value: 55.577460317460314 - type: precision value: 52.98583333333333 - type: recall value: 62.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tam-eng) config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy 
value: 92.18241042345277 - type: f1 value: 90.6468124709167 - type: precision value: 89.95656894679696 - type: recall value: 92.18241042345277 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slk-eng) config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.13333333333333 - type: precision value: 94.66666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tgl-eng) config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 95.85000000000001 - type: precision value: 95.39999999999999 - type: recall value: 96.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ast-eng) config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.1259842519685 - type: f1 value: 89.76377952755905 - type: precision value: 88.71391076115485 - type: recall value: 92.1259842519685 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mkd-eng) config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.49 - type: precision value: 91.725 - type: recall value: 94.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (khm-eng) config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.5623268698061 - type: f1 value: 73.27364463791058 - type: precision value: 71.51947852086357 - type: recall value: 77.5623268698061 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ces-eng) config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.56666666666666 - type: precision value: 96.16666666666667 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tzl-eng) config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.34615384615384 - type: f1 value: 61.092032967032964 - type: precision value: 59.27197802197802 - type: recall value: 66.34615384615384 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (urd-eng) config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.41190476190476 - type: precision value: 92.7 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ara-eng) config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.13333333333333 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kor-eng) config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.97333333333334 - type: precision value: 91.14166666666667 - type: recall value: 93.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB 
Tatoeba (yid-eng) config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.21698113207547 - type: f1 value: 90.3796046720575 - type: precision value: 89.56367924528303 - type: recall value: 92.21698113207547 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fin-eng) config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.6 - type: f1 value: 96.91666666666667 - type: precision value: 96.6 - type: recall value: 97.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tha-eng) config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.44525547445255 - type: f1 value: 96.71532846715328 - type: precision value: 96.35036496350365 - type: recall value: 97.44525547445255 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (wuu-eng) config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.34000000000002 - type: precision value: 91.49166666666667 - type: recall value: 94.1 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.2910000000000004 - type: map_at_10 value: 10.373000000000001 - type: map_at_100 value: 15.612 - type: map_at_1000 value: 17.06 - type: map_at_3 value: 6.119 - type: map_at_5 value: 7.917000000000001 - type: mrr_at_1 value: 44.897999999999996 - type: mrr_at_10 value: 56.054 - type: mrr_at_100 value: 56.82000000000001 - type: mrr_at_1000 value: 56.82000000000001 - type: mrr_at_3 value: 52.381 - type: mrr_at_5 value: 53.81 - type: ndcg_at_1 value: 42.857 - type: ndcg_at_10 value: 27.249000000000002 - type: ndcg_at_100 value: 36.529 - type: ndcg_at_1000 value: 48.136 - type: ndcg_at_3 value: 33.938 - type: ndcg_at_5 value: 29.951 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 22.653000000000002 - type: precision_at_100 value: 7.000000000000001 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 32.653 - type: precision_at_5 value: 27.755000000000003 - type: recall_at_1 value: 3.2910000000000004 - type: recall_at_10 value: 16.16 - type: recall_at_100 value: 43.908 - type: recall_at_1000 value: 79.823 - type: recall_at_3 value: 7.156 - type: recall_at_5 value: 10.204 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.05879999999999 - type: ap value: 14.609748142799111 - type: f1 value: 54.878956295843096 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.61799660441426 - type: f1 value: 64.8698191961434 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.32860036611885 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 
70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.34714192048638 - type: cos_sim_ap value: 80.26732975975634 - type: cos_sim_f1 value: 73.53415148134374 - type: cos_sim_precision value: 69.34767360299276 - type: cos_sim_recall value: 78.25857519788919 - type: dot_accuracy value: 88.34714192048638 - type: dot_ap value: 80.26733698491206 - type: dot_f1 value: 73.53415148134374 - type: dot_precision value: 69.34767360299276 - type: dot_recall value: 78.25857519788919 - type: euclidean_accuracy value: 88.34714192048638 - type: euclidean_ap value: 80.26734337771738 - type: euclidean_f1 value: 73.53415148134374 - type: euclidean_precision value: 69.34767360299276 - type: euclidean_recall value: 78.25857519788919 - type: manhattan_accuracy value: 88.30541813196639 - type: manhattan_ap value: 80.19415808104145 - type: manhattan_f1 value: 73.55143870713441 - type: manhattan_precision value: 73.25307511122743 - type: manhattan_recall value: 73.85224274406332 - type: max_accuracy value: 88.34714192048638 - type: max_ap value: 80.26734337771738 - type: max_f1 value: 73.55143870713441 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.81061047075717 - type: cos_sim_ap value: 87.11747055081017 - type: cos_sim_f1 value: 80.04355498817256 - type: cos_sim_precision value: 78.1165262000733 - type: cos_sim_recall value: 82.06806282722513 - type: dot_accuracy value: 89.81061047075717 - type: dot_ap value: 87.11746902745236 - type: dot_f1 value: 80.04355498817256 - type: dot_precision value: 78.1165262000733 - type: dot_recall value: 82.06806282722513 - type: euclidean_accuracy value: 89.81061047075717 - type: euclidean_ap value: 87.11746919324248 - type: euclidean_f1 value: 80.04355498817256 - type: euclidean_precision value: 78.1165262000733 - type: euclidean_recall value: 82.06806282722513 - type: manhattan_accuracy value: 89.79508673885202 - type: manhattan_ap value: 87.11074390832218 - type: manhattan_f1 value: 80.13002540726349 - type: manhattan_precision value: 77.83826945412311 - type: manhattan_recall value: 82.56082537727133 - type: max_accuracy value: 89.81061047075717 - type: max_ap value: 87.11747055081017 - type: max_f1 value: 80.13002540726349 language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit --- ## Multilingual-E5-large-instruct [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024 This model has 24 layers and the embedding size is 1024. ## Usage Below are examples to encode queries and passages from the MS-MARCO passage ranking dataset. 
### Transformers

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, 'ๅ—็“œ็š„ๅฎถๅธธๅšๆณ•')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.ๆธ…็‚’ๅ—็“œไธ ๅŽŸๆ–™:ๅซฉๅ—็“œๅŠไธช ่ฐƒๆ–™:่‘ฑใ€็›ใ€็™ฝ็ณ–ใ€้ธก็ฒพ ๅšๆณ•: 1ใ€ๅ—็“œ็”จๅˆ€่–„่–„็š„ๅ‰ŠๅŽป่กจ้ขไธ€ๅฑ‚็šฎ,็”จๅ‹บๅญๅˆฎๅŽป็“ค 2ใ€ๆ“ฆๆˆ็ป†ไธ(ๆฒกๆœ‰ๆ“ฆ่œๆฟๅฐฑ็”จๅˆ€ๆ…ขๆ…ขๅˆ‡ๆˆ็ป†ไธ) 3ใ€้”…็ƒง็ƒญๆ”พๆฒน,ๅ…ฅ่‘ฑ่Šฑ็…ธๅ‡บ้ฆ™ๅ‘ณ 4ใ€ๅ…ฅๅ—็“œไธๅฟซ้€Ÿ็ฟป็‚’ไธ€ๅˆ†้’Ÿๅทฆๅณ,ๆ”พ็›ใ€ไธ€็‚น็™ฝ็ณ–ๅ’Œ้ธก็ฒพ่ฐƒๅ‘ณๅ‡บ้”… 2.้ฆ™่‘ฑ็‚’ๅ—็“œ ๅŽŸๆ–™:ๅ—็“œ1ๅช ่ฐƒๆ–™:้ฆ™่‘ฑใ€่’œๆœซใ€ๆฉ„ๆฆ„ๆฒนใ€็› ๅšๆณ•: 1ใ€ๅฐ†ๅ—็“œๅŽป็šฎ,ๅˆ‡ๆˆ็‰‡ 2ใ€ๆฒน้”…8ๆˆ็ƒญๅŽ,ๅฐ†่’œๆœซๆ”พๅ…ฅ็ˆ†้ฆ™ 3ใ€็ˆ†้ฆ™ๅŽ,ๅฐ†ๅ—็“œ็‰‡ๆ”พๅ…ฅ,็ฟป็‚’ 4ใ€ๅœจ็ฟป็‚’็š„ๅŒๆ—ถ,ๅฏไปฅไธๆ—ถๅœฐๅพ€้”…้‡ŒๅŠ ๆฐด,ไฝ†ไธ่ฆๅคชๅคš 5ใ€ๆ”พๅ…ฅ็›,็‚’ๅŒ€ 6ใ€ๅ—็“œๅทฎไธๅคš่ฝฏๅ’Œ็ปตไบ†ไน‹ๅŽ,ๅฐฑๅฏไปฅๅ…ณ็ซ 7ใ€ๆ’’ๅ…ฅ้ฆ™่‘ฑ,ๅณๅฏๅ‡บ้”…"
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
```

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, 'ๅ—็“œ็š„ๅฎถๅธธๅšๆณ•')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "1.ๆธ…็‚’ๅ—็“œไธ ๅŽŸๆ–™:ๅซฉๅ—็“œๅŠไธช ่ฐƒๆ–™:่‘ฑใ€็›ใ€็™ฝ็ณ–ใ€้ธก็ฒพ ๅšๆณ•: 1ใ€ๅ—็“œ็”จๅˆ€่–„่–„็š„ๅ‰ŠๅŽป่กจ้ขไธ€ๅฑ‚็šฎ,็”จๅ‹บๅญๅˆฎๅŽป็“ค 2ใ€ๆ“ฆๆˆ็ป†ไธ(ๆฒกๆœ‰ๆ“ฆ่œๆฟๅฐฑ็”จๅˆ€ๆ…ขๆ…ขๅˆ‡ๆˆ็ป†ไธ) 3ใ€้”…็ƒง็ƒญๆ”พๆฒน,ๅ…ฅ่‘ฑ่Šฑ็…ธๅ‡บ้ฆ™ๅ‘ณ 4ใ€ๅ…ฅๅ—็“œไธๅฟซ้€Ÿ็ฟป็‚’ไธ€ๅˆ†้’Ÿๅทฆๅณ,ๆ”พ็›ใ€ไธ€็‚น็™ฝ็ณ–ๅ’Œ้ธก็ฒพ่ฐƒๅ‘ณๅ‡บ้”… 2.้ฆ™่‘ฑ็‚’ๅ—็“œ ๅŽŸๆ–™:ๅ—็“œ1ๅช ่ฐƒๆ–™:้ฆ™่‘ฑใ€่’œๆœซใ€ๆฉ„ๆฆ„ๆฒนใ€็› ๅšๆณ•: 1ใ€ๅฐ†ๅ—็“œๅŽป็šฎ,ๅˆ‡ๆˆ็‰‡ 2ใ€ๆฒน้”…8ๆˆ็ƒญๅŽ,ๅฐ†่’œๆœซๆ”พๅ…ฅ็ˆ†้ฆ™ 3ใ€็ˆ†้ฆ™ๅŽ,ๅฐ†ๅ—็“œ็‰‡ๆ”พๅ…ฅ,็ฟป็‚’ 4ใ€ๅœจ็ฟป็‚’็š„ๅŒๆ—ถ,ๅฏไปฅไธๆ—ถๅœฐๅพ€้”…้‡ŒๅŠ ๆฐด,ไฝ†ไธ่ฆๅคชๅคš 5ใ€ๆ”พๅ…ฅ็›,็‚’ๅŒ€ 6ใ€ๅ—็“œๅทฎไธๅคš่ฝฏๅ’Œ็ปตไบ†ไน‹ๅŽ,ๅฐฑๅฏไปฅๅ…ณ็ซ 7ใ€ๆ’’ๅ…ฅ้ฆ™่‘ฑ,ๅณๅฏๅ‡บ้”…"
]
input_texts = queries + documents

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')

embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
```

### Infinity

Usage with [Infinity](https://github.com/michaelfeil/infinity):

```bash
docker run --gpus all -v $PWD/data:/app/.cache -e HF_TOKEN=$HF_TOKEN -p "7997":"7997" \
 michaelf34/infinity:0.0.68 \
 v2 --model-id intfloat/multilingual-e5-large-instruct --revision "main" --dtype float16 --batch-size 32 --engine torch --port 7997
```

## Supported Languages

This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports the 100 languages covered by xlm-roberta, but low-resource languages may see performance degradation.

## Training Details

**Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)

**First stage**: contrastive pre-training with 1 billion weakly supervised text pairs.

**Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper.

## MTEB Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).

## FAQ

**1. Do I need to add instructions to the query?**

Yes, this is how the model is trained; otherwise you will see a performance degradation. The task definition should be a one-sentence instruction that describes the task. This is a way to customize text embeddings for different scenarios through natural language instructions. Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for the instructions we used for evaluation. On the other hand, there is no need to add instructions to the document side.

**2. Why are my reproduced results slightly different from those reported in the model card?**

Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.

**3. Why do the cosine similarity scores cluster between 0.7 and 1.0?**

This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.
## Citation

If you find our paper or models helpful, please consider citing it as follows:

```
@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}
```

## Limitations

Long texts will be truncated to at most 512 tokens.
featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF
featherless-ai-quants
2025-06-03T17:43:05Z
0
0
null
[ "gguf", "text-generation", "base_model:rinna/llama-3-youko-70b-instruct", "base_model:quantized:rinna/llama-3-youko-70b-instruct", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-03T16:28:46Z
--- base_model: rinna/llama-3-youko-70b-instruct pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # rinna/llama-3-youko-70b-instruct GGUF Quantizations ๐Ÿš€ ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* > Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations ๐Ÿ“Š | Quantization Type | File | Size | |-------------------|------|------| | IQ4_XS | [rinna-llama-3-youko-70b-instruct-IQ4_XS](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-IQ4_XS) | 36496.80 MB (folder) | | Q2_K | [rinna-llama-3-youko-70b-instruct-Q2_K](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q2_K) | 25153.26 MB (folder) | | Q3_K_L | [rinna-llama-3-youko-70b-instruct-Q3_K_L](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q3_K_L) | 35420.03 MB (folder) | | Q3_K_M | [rinna-llama-3-youko-70b-instruct-Q3_K_M](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q3_K_M) | 32680.03 MB (folder) | | Q3_K_S | [rinna-llama-3-youko-70b-instruct-Q3_K_S](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q3_K_S) | 29480.03 MB (folder) | | Q4_K_M | [rinna-llama-3-youko-70b-instruct-Q4_K_M](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q4_K_M) | 40550.61 MB (folder) | | Q4_K_S | [rinna-llama-3-youko-70b-instruct-Q4_K_S](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q4_K_S) | 38478.11 MB (folder) | | Q5_K_M | [rinna-llama-3-youko-70b-instruct-Q5_K_M](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q5_K_M) | 47635.86 MB (folder) | | Q5_K_S | [rinna-llama-3-youko-70b-instruct-Q5_K_S](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q5_K_S) | 46403.36 MB (folder) | | Q6_K | [rinna-llama-3-youko-70b-instruct-Q6_K](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q6_K) | 55206.44 MB (folder) | | Q8_0 | [rinna-llama-3-youko-70b-instruct-Q8_0](https://huggingface.co/featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF/tree/main/rinna-llama-3-youko-70b-instruct-Q8_0) | 71501.78 MB (folder) | --- ## โšก Powered by [Featherless AI](https://featherless.ai) ### Key Features - ๐Ÿ”ฅ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - ๐Ÿ› ๏ธ **Zero Infrastructure** - No server setup or maintenance required - ๐Ÿ“š **Vast Compatibility** - Support for 2400+ models and counting - ๐Ÿ’Ž **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
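Each quantization above is stored as a folder (possibly containing multi-part GGUF files) rather than a single file. A minimal sketch for pulling down just one quantization with `huggingface_hub` (the folder name comes from the table above; the download code itself is an illustrative assumption, not part of the original card):

```python
from huggingface_hub import snapshot_download

# Download only the Q4_K_S folder; change allow_patterns to pick another quantization
local_dir = snapshot_download(
    repo_id="featherless-ai-quants/rinna-llama-3-youko-70b-instruct-GGUF",
    allow_patterns=["rinna-llama-3-youko-70b-instruct-Q4_K_S/*"],
    local_dir="rinna-youko-70b-Q4_K_S",
)
print(local_dir)  # point a GGUF runtime such as llama.cpp at the first shard in this folder
```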
MaIlz/outputs_grpo_improved2_2
MaIlz
2025-06-03T17:42:03Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "unsloth", "trl", "grpo", "arxiv:2402.03300", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-06-03T17:41:56Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit library_name: transformers model_name: outputs_grpo_improved2_2 tags: - generated_from_trainer - unsloth - trl - grpo licence: license --- # Model Card for outputs_grpo_improved2_2 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MaIlz/outputs_grpo_improved2_2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
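The card documents the GRPO recipe and framework versions but not a training script. A minimal, hypothetical sketch of GRPO fine-tuning with TRL's `GRPOTrainer` is shown below; the dataset (`trl-lib/tldr`) and the toy length-based reward are illustrative stand-ins, not what was actually used to train this model.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative prompt dataset; the real training data for this model is not documented
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions around 50 characters (stand-in for the real reward functions)
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="outputs_grpo_improved2_2", logging_steps=10)
trainer = GRPOTrainer(
    model="unsloth/llama-3-8b-Instruct-bnb-4bit",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```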
ArliAI/RpR-v4-Fast-30B-A3B
ArliAI
2025-06-03T17:40:15Z
0
0
null
[ "safetensors", "qwen3_moe", "license:apache-2.0", "region:us" ]
null
2025-06-03T17:14:23Z
--- license: apache-2.0 ---
ClaMncDexter/gemma-3-4b-it-unsloth-bnb-4bit-float16
ClaMncDexter
2025-06-03T17:39:49Z
17
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-30T17:56:47Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** ClaMncDexter - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jaumebalust/PPO-LunarLander-v2
jaumebalust
2025-06-03T17:36:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-03T17:36:30Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 272.56 +/- 18.59
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The snippet below loads the checkpoint from the Hub. The filename is an assumption based on the usual `huggingface_sb3` naming convention; check the repository's file list if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint (filename assumed, see note above) and load the agent
checkpoint = load_from_hub("jaumebalust/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
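A short, hypothetical evaluation sketch for sanity-checking the reported mean reward. It assumes `gymnasium[box2d]` is installed and, as above, that the checkpoint follows the standard `huggingface_sb3` naming; neither the environment setup nor the evaluation code comes from the original card.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed to follow the standard huggingface_sb3 convention
checkpoint = load_from_hub("jaumebalust/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# On gymnasium >= 1.0 the environment id is "LunarLander-v3"
env = gym.make("LunarLander-v2")

# Average episodic return over 10 episodes; should land near the reported 272.56 +/- 18.59
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```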
sensurs/Full.Video.Cikgu.Fadhilah.Video.CCTV.Wiring.video
sensurs
2025-06-03T17:36:24Z
0
0
null
[ "region:us" ]
null
2025-06-03T17:36:18Z
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a> <a href="https://sdu.sk/uLf" rel="nofollow">โ–บโœ… ๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐—ฆ๐—ถ๐—ด๐—ป ๐—จ๐—ฝ ๐˜๐—ผ ๐™๐™ช๐™ก๐™ก ๐—ช๐—ฎ๐˜๐—ฐ๐—ต ๐™‘๐™ž๐™™๐™š๐™คโค๏ธโค๏ธ)</a> <a href="https://sdu.sk/uLf" rel="nofollow">๐Ÿ”ด โžคโ–บโœ…๐˜พ๐™‡๐™„๐˜พ๐™† ๐™ƒ๐™€๐™๐™€ ==โ–บโ–บ (๐…๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐ž๐จ ๐ฅ๐ข๐ง๐ค)</a>
mradermacher/Olmoe-0.5B-6B-GGUF
mradermacher
2025-06-03T17:36:07Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:reflex-ai/Olmoe-0.5B-6B", "base_model:quantized:reflex-ai/Olmoe-0.5B-6B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-03T09:31:58Z
--- base_model: reflex-ai/Olmoe-0.5B-6B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/reflex-ai/Olmoe-0.5B-6B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q2_K.gguf) | Q2_K | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q3_K_L.gguf) | Q3_K_L | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q6_K.gguf) | Q6_K | 7.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Olmoe-0.5B-6B-GGUF/resolve/main/Olmoe-0.5B-6B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
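As a concrete instance of the GGUF workflow the README points to, here is a hypothetical sketch that downloads one of the single-file quants listed above and loads it with `llama-cpp-python` (an assumed runtime; any GGUF-capable engine such as llama.cpp works similarly):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the "fast, recommended" Q4_K_M quant from the table above
gguf_path = hf_hub_download(
    repo_id="mradermacher/Olmoe-0.5B-6B-GGUF",
    filename="Olmoe-0.5B-6B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain what a mixture-of-experts model is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```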
OrcaDB/qwen2-1.5b
OrcaDB
2025-06-03T17:34:15Z
35,083
0
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen2", "feature-extraction", "mteb", "transformers", "Qwen2", "sentence-similarity", "custom_code", "arxiv:2308.03281", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-25T19:40:33Z
--- tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity license: apache-2.0 model-index: - name: gte-qwen2-7B-instruct results: - dataset: config: en name: MTEB AmazonCounterfactualClassification (en) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 83.98507462686567 - type: ap value: 50.93015252587014 - type: f1 value: 78.50416599051215 task: type: Classification - dataset: config: default name: MTEB AmazonPolarityClassification revision: e2d317d38cd51312af73b3d32a06d1a08b442046 split: test type: mteb/amazon_polarity metrics: - type: accuracy value: 96.61065 - type: ap value: 94.89174052954196 - type: f1 value: 96.60942596940565 task: type: Classification - dataset: config: en name: MTEB AmazonReviewsClassification (en) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 55.614000000000004 - type: f1 value: 54.90553480294904 task: type: Classification - dataset: config: default name: MTEB ArguAna revision: c22ab2a51041ffd869aaddef7af8d8215647e41a split: test type: mteb/arguana metrics: - type: map_at_1 value: 45.164 - type: map_at_10 value: 61.519 - type: map_at_100 value: 61.769 - type: map_at_1000 value: 61.769 - type: map_at_3 value: 57.443999999999996 - type: map_at_5 value: 60.058 - type: mrr_at_1 value: 46.088 - type: mrr_at_10 value: 61.861 - type: mrr_at_100 value: 62.117999999999995 - type: mrr_at_1000 value: 62.117999999999995 - type: mrr_at_3 value: 57.729 - type: mrr_at_5 value: 60.392 - type: ndcg_at_1 value: 45.164 - type: ndcg_at_10 value: 69.72 - type: ndcg_at_100 value: 70.719 - type: ndcg_at_1000 value: 70.719 - type: ndcg_at_3 value: 61.517999999999994 - type: ndcg_at_5 value: 66.247 - type: precision_at_1 value: 45.164 - type: precision_at_10 value: 9.545 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 24.443 - type: precision_at_5 value: 16.97 - type: recall_at_1 value: 45.164 - type: recall_at_10 value: 95.448 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 73.329 - type: recall_at_5 value: 84.851 task: type: Retrieval - dataset: config: default name: MTEB ArxivClusteringP2P revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d split: test type: mteb/arxiv-clustering-p2p metrics: - type: v_measure value: 50.511868162026175 task: type: Clustering - dataset: config: default name: MTEB ArxivClusteringS2S revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 split: test type: mteb/arxiv-clustering-s2s metrics: - type: v_measure value: 45.007803189284004 task: type: Clustering - dataset: config: default name: MTEB AskUbuntuDupQuestions revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 split: test type: mteb/askubuntudupquestions-reranking metrics: - type: map value: 64.55292107723382 - type: mrr value: 77.66158818097877 task: type: Reranking - dataset: config: default name: MTEB BIOSSES revision: d3fb88f8f02e40887cd149695127462bbcf29b4a split: test type: mteb/biosses-sts metrics: - type: cos_sim_pearson value: 85.65459047085452 - type: cos_sim_spearman value: 82.10729255710761 - type: euclidean_pearson value: 82.78079159312476 - type: euclidean_spearman value: 80.50002701880933 - type: manhattan_pearson value: 82.41372641383016 - type: manhattan_spearman value: 80.57412509272639 task: type: STS - dataset: config: default name: MTEB Banking77Classification revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 split: test type: mteb/banking77 metrics: - type: accuracy value: 87.30844155844156 - type: f1 value: 87.25307322443255 task: type: Classification - dataset: config: default name: MTEB BiorxivClusteringP2P revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 split: test type: mteb/biorxiv-clustering-p2p metrics: - type: v_measure value: 43.20754608934859 task: type: Clustering - dataset: config: default name: MTEB BiorxivClusteringS2S revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 split: test type: mteb/biorxiv-clustering-s2s metrics: - type: v_measure value: 38.818037697335505 task: type: Clustering - dataset: config: default name: MTEB CQADupstackAndroidRetrieval revision: f46a197baaae43b4f621051089b82a364682dfeb split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 35.423 - type: map_at_10 value: 47.198 - type: map_at_100 value: 48.899 - type: map_at_1000 value: 49.004 - type: map_at_3 value: 43.114999999999995 - type: map_at_5 value: 45.491 - type: mrr_at_1 value: 42.918 - type: mrr_at_10 value: 53.299 - type: mrr_at_100 value: 54.032000000000004 - type: mrr_at_1000 value: 54.055 - type: mrr_at_3 value: 50.453 - type: mrr_at_5 value: 52.205999999999996 - type: ndcg_at_1 value: 42.918 - type: ndcg_at_10 value: 53.98 - type: ndcg_at_100 value: 59.57 - type: ndcg_at_1000 value: 60.879000000000005 - type: ndcg_at_3 value: 48.224000000000004 - type: ndcg_at_5 value: 50.998 - type: precision_at_1 value: 42.918 - type: precision_at_10 value: 10.299999999999999 - type: precision_at_100 value: 1.687 - type: precision_at_1000 value: 0.211 - type: precision_at_3 value: 22.842000000000002 - type: precision_at_5 value: 16.681 - type: recall_at_1 value: 35.423 - type: recall_at_10 value: 66.824 - type: recall_at_100 value: 89.564 - type: recall_at_1000 value: 97.501 - type: recall_at_3 value: 50.365 - type: recall_at_5 value: 57.921 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackEnglishRetrieval revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 33.205 - type: map_at_10 value: 44.859 - type: map_at_100 value: 46.135 - type: map_at_1000 value: 46.259 - type: map_at_3 value: 41.839 - type: map_at_5 value: 43.662 - type: mrr_at_1 value: 41.146 - type: mrr_at_10 value: 50.621 - type: mrr_at_100 value: 51.207 - type: mrr_at_1000 value: 51.246 - type: mrr_at_3 value: 48.535000000000004 - type: mrr_at_5 value: 49.818 - type: ndcg_at_1 value: 41.146 - type: ndcg_at_10 value: 50.683 - type: ndcg_at_100 value: 54.82 - type: ndcg_at_1000 value: 56.69 - type: ndcg_at_3 value: 46.611000000000004 - type: ndcg_at_5 value: 48.66 - type: precision_at_1 value: 41.146 - type: precision_at_10 value: 9.439 - type: precision_at_100 value: 1.465 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 22.59 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 33.205 - type: recall_at_10 value: 61.028999999999996 - type: recall_at_100 value: 78.152 - type: recall_at_1000 value: 89.59700000000001 - type: recall_at_3 value: 49.05 - type: recall_at_5 value: 54.836 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGamingRetrieval revision: 4885aa143210c98657558c04aaf3dc47cfb54340 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 41.637 - type: map_at_10 value: 55.162 - type: map_at_100 value: 56.142 - type: map_at_1000 value: 56.188 - type: map_at_3 value: 51.564 - type: map_at_5 value: 53.696 - type: mrr_at_1 value: 
47.524 - type: mrr_at_10 value: 58.243 - type: mrr_at_100 value: 58.879999999999995 - type: mrr_at_1000 value: 58.9 - type: mrr_at_3 value: 55.69499999999999 - type: mrr_at_5 value: 57.284 - type: ndcg_at_1 value: 47.524 - type: ndcg_at_10 value: 61.305 - type: ndcg_at_100 value: 65.077 - type: ndcg_at_1000 value: 65.941 - type: ndcg_at_3 value: 55.422000000000004 - type: ndcg_at_5 value: 58.516 - type: precision_at_1 value: 47.524 - type: precision_at_10 value: 9.918000000000001 - type: precision_at_100 value: 1.276 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.765 - type: precision_at_5 value: 17.204 - type: recall_at_1 value: 41.637 - type: recall_at_10 value: 76.185 - type: recall_at_100 value: 92.149 - type: recall_at_1000 value: 98.199 - type: recall_at_3 value: 60.856 - type: recall_at_5 value: 68.25099999999999 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGisRetrieval revision: 5003b3064772da1887988e05400cf3806fe491f2 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 26.27 - type: map_at_10 value: 37.463 - type: map_at_100 value: 38.434000000000005 - type: map_at_1000 value: 38.509 - type: map_at_3 value: 34.226 - type: map_at_5 value: 36.161 - type: mrr_at_1 value: 28.588 - type: mrr_at_10 value: 39.383 - type: mrr_at_100 value: 40.23 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 36.422 - type: mrr_at_5 value: 38.252 - type: ndcg_at_1 value: 28.588 - type: ndcg_at_10 value: 43.511 - type: ndcg_at_100 value: 48.274 - type: ndcg_at_1000 value: 49.975 - type: ndcg_at_3 value: 37.319 - type: ndcg_at_5 value: 40.568 - type: precision_at_1 value: 28.588 - type: precision_at_10 value: 6.893000000000001 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 16.347 - type: precision_at_5 value: 11.661000000000001 - type: recall_at_1 value: 26.27 - type: recall_at_10 value: 60.284000000000006 - type: recall_at_100 value: 81.902 - type: recall_at_1000 value: 94.43 - type: recall_at_3 value: 43.537 - type: recall_at_5 value: 51.475 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackMathematicaRetrieval revision: 90fceea13679c63fe563ded68f3b6f06e50061de split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 18.168 - type: map_at_10 value: 28.410000000000004 - type: map_at_100 value: 29.78 - type: map_at_1000 value: 29.892999999999997 - type: map_at_3 value: 25.238 - type: map_at_5 value: 26.96 - type: mrr_at_1 value: 23.507 - type: mrr_at_10 value: 33.382 - type: mrr_at_100 value: 34.404 - type: mrr_at_1000 value: 34.467999999999996 - type: mrr_at_3 value: 30.637999999999998 - type: mrr_at_5 value: 32.199 - type: ndcg_at_1 value: 23.507 - type: ndcg_at_10 value: 34.571000000000005 - type: ndcg_at_100 value: 40.663 - type: ndcg_at_1000 value: 43.236000000000004 - type: ndcg_at_3 value: 29.053 - type: ndcg_at_5 value: 31.563999999999997 - type: precision_at_1 value: 23.507 - type: precision_at_10 value: 6.654 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 14.427999999999999 - type: precision_at_5 value: 10.498000000000001 - type: recall_at_1 value: 18.168 - type: recall_at_10 value: 48.443000000000005 - type: recall_at_100 value: 74.47 - type: recall_at_1000 value: 92.494 - type: recall_at_3 value: 33.379999999999995 - type: recall_at_5 value: 39.76 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackPhysicsRetrieval revision: 
79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 32.39 - type: map_at_10 value: 44.479 - type: map_at_100 value: 45.977000000000004 - type: map_at_1000 value: 46.087 - type: map_at_3 value: 40.976 - type: map_at_5 value: 43.038 - type: mrr_at_1 value: 40.135 - type: mrr_at_10 value: 50.160000000000004 - type: mrr_at_100 value: 51.052 - type: mrr_at_1000 value: 51.087 - type: mrr_at_3 value: 47.818 - type: mrr_at_5 value: 49.171 - type: ndcg_at_1 value: 40.135 - type: ndcg_at_10 value: 50.731 - type: ndcg_at_100 value: 56.452000000000005 - type: ndcg_at_1000 value: 58.123000000000005 - type: ndcg_at_3 value: 45.507 - type: ndcg_at_5 value: 48.11 - type: precision_at_1 value: 40.135 - type: precision_at_10 value: 9.192 - type: precision_at_100 value: 1.397 - type: precision_at_1000 value: 0.169 - type: precision_at_3 value: 21.816 - type: precision_at_5 value: 15.476 - type: recall_at_1 value: 32.39 - type: recall_at_10 value: 63.597 - type: recall_at_100 value: 86.737 - type: recall_at_1000 value: 97.039 - type: recall_at_3 value: 48.906 - type: recall_at_5 value: 55.659000000000006 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackProgrammersRetrieval revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.397 - type: map_at_10 value: 39.871 - type: map_at_100 value: 41.309000000000005 - type: map_at_1000 value: 41.409 - type: map_at_3 value: 36.047000000000004 - type: map_at_5 value: 38.104 - type: mrr_at_1 value: 34.703 - type: mrr_at_10 value: 44.773 - type: mrr_at_100 value: 45.64 - type: mrr_at_1000 value: 45.678999999999995 - type: mrr_at_3 value: 41.705 - type: mrr_at_5 value: 43.406 - type: ndcg_at_1 value: 34.703 - type: ndcg_at_10 value: 46.271 - type: ndcg_at_100 value: 52.037 - type: ndcg_at_1000 value: 53.81700000000001 - type: ndcg_at_3 value: 39.966 - type: ndcg_at_5 value: 42.801 - type: precision_at_1 value: 34.703 - type: precision_at_10 value: 8.744 - type: precision_at_100 value: 1.348 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 19.102 - type: precision_at_5 value: 13.836 - type: recall_at_1 value: 28.397 - type: recall_at_10 value: 60.299 - type: recall_at_100 value: 84.595 - type: recall_at_1000 value: 96.155 - type: recall_at_3 value: 43.065 - type: recall_at_5 value: 50.371 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackRetrieval revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.044333333333338 - type: map_at_10 value: 38.78691666666666 - type: map_at_100 value: 40.113 - type: map_at_1000 value: 40.22125 - type: map_at_3 value: 35.52966666666667 - type: map_at_5 value: 37.372749999999996 - type: mrr_at_1 value: 33.159083333333335 - type: mrr_at_10 value: 42.913583333333335 - type: mrr_at_100 value: 43.7845 - type: mrr_at_1000 value: 43.830333333333336 - type: mrr_at_3 value: 40.29816666666667 - type: mrr_at_5 value: 41.81366666666667 - type: ndcg_at_1 value: 33.159083333333335 - type: ndcg_at_10 value: 44.75750000000001 - type: ndcg_at_100 value: 50.13658333333334 - type: ndcg_at_1000 value: 52.037 - type: ndcg_at_3 value: 39.34258333333334 - type: ndcg_at_5 value: 41.93708333333333 - type: precision_at_1 value: 33.159083333333335 - type: precision_at_10 value: 7.952416666666667 - type: precision_at_100 value: 1.2571666666666668 - type: precision_at_1000 value: 0.16099999999999998 - type: precision_at_3 
value: 18.303833333333337 - type: precision_at_5 value: 13.057083333333333 - type: recall_at_1 value: 28.044333333333338 - type: recall_at_10 value: 58.237249999999996 - type: recall_at_100 value: 81.35391666666666 - type: recall_at_1000 value: 94.21283333333334 - type: recall_at_3 value: 43.32341666666667 - type: recall_at_5 value: 49.94908333333333 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackStatsRetrieval revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 27.838 - type: map_at_10 value: 36.04 - type: map_at_100 value: 37.113 - type: map_at_1000 value: 37.204 - type: map_at_3 value: 33.585 - type: map_at_5 value: 34.845 - type: mrr_at_1 value: 30.982 - type: mrr_at_10 value: 39.105000000000004 - type: mrr_at_100 value: 39.98 - type: mrr_at_1000 value: 40.042 - type: mrr_at_3 value: 36.912 - type: mrr_at_5 value: 38.062000000000005 - type: ndcg_at_1 value: 30.982 - type: ndcg_at_10 value: 40.982 - type: ndcg_at_100 value: 46.092 - type: ndcg_at_1000 value: 48.25 - type: ndcg_at_3 value: 36.41 - type: ndcg_at_5 value: 38.379999999999995 - type: precision_at_1 value: 30.982 - type: precision_at_10 value: 6.534 - type: precision_at_100 value: 0.9820000000000001 - type: precision_at_1000 value: 0.124 - type: precision_at_3 value: 15.745999999999999 - type: precision_at_5 value: 10.828 - type: recall_at_1 value: 27.838 - type: recall_at_10 value: 52.971000000000004 - type: recall_at_100 value: 76.357 - type: recall_at_1000 value: 91.973 - type: recall_at_3 value: 40.157 - type: recall_at_5 value: 45.147999999999996 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackTexRetrieval revision: 46989137a86843e03a6195de44b09deda022eec7 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 19.059 - type: map_at_10 value: 27.454 - type: map_at_100 value: 28.736 - type: map_at_1000 value: 28.865000000000002 - type: map_at_3 value: 24.773999999999997 - type: map_at_5 value: 26.266000000000002 - type: mrr_at_1 value: 23.125 - type: mrr_at_10 value: 31.267 - type: mrr_at_100 value: 32.32 - type: mrr_at_1000 value: 32.394 - type: mrr_at_3 value: 28.894 - type: mrr_at_5 value: 30.281000000000002 - type: ndcg_at_1 value: 23.125 - type: ndcg_at_10 value: 32.588 - type: ndcg_at_100 value: 38.432 - type: ndcg_at_1000 value: 41.214 - type: ndcg_at_3 value: 27.938000000000002 - type: ndcg_at_5 value: 30.127 - type: precision_at_1 value: 23.125 - type: precision_at_10 value: 5.9639999999999995 - type: precision_at_100 value: 1.047 - type: precision_at_1000 value: 0.148 - type: precision_at_3 value: 13.294 - type: precision_at_5 value: 9.628 - type: recall_at_1 value: 19.059 - type: recall_at_10 value: 44.25 - type: recall_at_100 value: 69.948 - type: recall_at_1000 value: 89.35300000000001 - type: recall_at_3 value: 31.114000000000004 - type: recall_at_5 value: 36.846000000000004 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackUnixRetrieval revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.355999999999998 - type: map_at_10 value: 39.055 - type: map_at_100 value: 40.486 - type: map_at_1000 value: 40.571 - type: map_at_3 value: 35.69 - type: map_at_5 value: 37.605 - type: mrr_at_1 value: 33.302 - type: mrr_at_10 value: 42.986000000000004 - type: mrr_at_100 value: 43.957 - type: mrr_at_1000 value: 43.996 - type: mrr_at_3 value: 40.111999999999995 - type: mrr_at_5 value: 41.735 - type: ndcg_at_1 
value: 33.302 - type: ndcg_at_10 value: 44.962999999999994 - type: ndcg_at_100 value: 50.917 - type: ndcg_at_1000 value: 52.622 - type: ndcg_at_3 value: 39.182 - type: ndcg_at_5 value: 41.939 - type: precision_at_1 value: 33.302 - type: precision_at_10 value: 7.779999999999999 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.035 - type: precision_at_5 value: 12.873000000000001 - type: recall_at_1 value: 28.355999999999998 - type: recall_at_10 value: 58.782000000000004 - type: recall_at_100 value: 84.02199999999999 - type: recall_at_1000 value: 95.511 - type: recall_at_3 value: 43.126999999999995 - type: recall_at_5 value: 50.14999999999999 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWebmastersRetrieval revision: 160c094312a0e1facb97e55eeddb698c0abe3571 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 27.391 - type: map_at_10 value: 37.523 - type: map_at_100 value: 39.312000000000005 - type: map_at_1000 value: 39.54 - type: map_at_3 value: 34.231 - type: map_at_5 value: 36.062 - type: mrr_at_1 value: 32.016 - type: mrr_at_10 value: 41.747 - type: mrr_at_100 value: 42.812 - type: mrr_at_1000 value: 42.844 - type: mrr_at_3 value: 39.129999999999995 - type: mrr_at_5 value: 40.524 - type: ndcg_at_1 value: 32.016 - type: ndcg_at_10 value: 43.826 - type: ndcg_at_100 value: 50.373999999999995 - type: ndcg_at_1000 value: 52.318 - type: ndcg_at_3 value: 38.479 - type: ndcg_at_5 value: 40.944 - type: precision_at_1 value: 32.016 - type: precision_at_10 value: 8.280999999999999 - type: precision_at_100 value: 1.6760000000000002 - type: precision_at_1000 value: 0.25 - type: precision_at_3 value: 18.05 - type: precision_at_5 value: 13.083 - type: recall_at_1 value: 27.391 - type: recall_at_10 value: 56.928999999999995 - type: recall_at_100 value: 85.169 - type: recall_at_1000 value: 96.665 - type: recall_at_3 value: 42.264 - type: recall_at_5 value: 48.556 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWordpressRetrieval revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 18.398 - type: map_at_10 value: 27.929 - type: map_at_100 value: 29.032999999999998 - type: map_at_1000 value: 29.126 - type: map_at_3 value: 25.070999999999998 - type: map_at_5 value: 26.583000000000002 - type: mrr_at_1 value: 19.963 - type: mrr_at_10 value: 29.997 - type: mrr_at_100 value: 30.9 - type: mrr_at_1000 value: 30.972 - type: mrr_at_3 value: 27.264 - type: mrr_at_5 value: 28.826 - type: ndcg_at_1 value: 19.963 - type: ndcg_at_10 value: 33.678999999999995 - type: ndcg_at_100 value: 38.931 - type: ndcg_at_1000 value: 41.379 - type: ndcg_at_3 value: 28.000000000000004 - type: ndcg_at_5 value: 30.637999999999998 - type: precision_at_1 value: 19.963 - type: precision_at_10 value: 5.7299999999999995 - type: precision_at_100 value: 0.902 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 12.631 - type: precision_at_5 value: 9.057 - type: recall_at_1 value: 18.398 - type: recall_at_10 value: 49.254 - type: recall_at_100 value: 73.182 - type: recall_at_1000 value: 91.637 - type: recall_at_3 value: 34.06 - type: recall_at_5 value: 40.416000000000004 task: type: Retrieval - dataset: config: default name: MTEB ClimateFEVER revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 split: test type: mteb/climate-fever metrics: - type: map_at_1 value: 19.681 - type: map_at_10 value: 32.741 - type: map_at_100 value: 34.811 - type: 
map_at_1000 value: 35.003 - type: map_at_3 value: 27.697 - type: map_at_5 value: 30.372 - type: mrr_at_1 value: 44.951 - type: mrr_at_10 value: 56.34400000000001 - type: mrr_at_100 value: 56.961 - type: mrr_at_1000 value: 56.987 - type: mrr_at_3 value: 53.681 - type: mrr_at_5 value: 55.407 - type: ndcg_at_1 value: 44.951 - type: ndcg_at_10 value: 42.905 - type: ndcg_at_100 value: 49.95 - type: ndcg_at_1000 value: 52.917 - type: ndcg_at_3 value: 36.815 - type: ndcg_at_5 value: 38.817 - type: precision_at_1 value: 44.951 - type: precision_at_10 value: 12.989999999999998 - type: precision_at_100 value: 2.068 - type: precision_at_1000 value: 0.263 - type: precision_at_3 value: 27.275 - type: precision_at_5 value: 20.365 - type: recall_at_1 value: 19.681 - type: recall_at_10 value: 48.272999999999996 - type: recall_at_100 value: 71.87400000000001 - type: recall_at_1000 value: 87.929 - type: recall_at_3 value: 32.653999999999996 - type: recall_at_5 value: 39.364 task: type: Retrieval - dataset: config: default name: MTEB DBPedia revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 split: test type: mteb/dbpedia metrics: - type: map_at_1 value: 10.231 - type: map_at_10 value: 22.338 - type: map_at_100 value: 31.927 - type: map_at_1000 value: 33.87 - type: map_at_3 value: 15.559999999999999 - type: map_at_5 value: 18.239 - type: mrr_at_1 value: 75.0 - type: mrr_at_10 value: 81.303 - type: mrr_at_100 value: 81.523 - type: mrr_at_1000 value: 81.53 - type: mrr_at_3 value: 80.083 - type: mrr_at_5 value: 80.758 - type: ndcg_at_1 value: 64.625 - type: ndcg_at_10 value: 48.687000000000005 - type: ndcg_at_100 value: 52.791 - type: ndcg_at_1000 value: 60.041999999999994 - type: ndcg_at_3 value: 53.757999999999996 - type: ndcg_at_5 value: 50.76500000000001 - type: precision_at_1 value: 75.0 - type: precision_at_10 value: 38.3 - type: precision_at_100 value: 12.025 - type: precision_at_1000 value: 2.3970000000000002 - type: precision_at_3 value: 55.417 - type: precision_at_5 value: 47.5 - type: recall_at_1 value: 10.231 - type: recall_at_10 value: 27.697 - type: recall_at_100 value: 57.409 - type: recall_at_1000 value: 80.547 - type: recall_at_3 value: 16.668 - type: recall_at_5 value: 20.552 task: type: Retrieval - dataset: config: default name: MTEB EmotionClassification revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 split: test type: mteb/emotion metrics: - type: accuracy value: 61.365 - type: f1 value: 56.7540827912991 task: type: Classification - dataset: config: default name: MTEB FEVER revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 split: test type: mteb/fever metrics: - type: map_at_1 value: 83.479 - type: map_at_10 value: 88.898 - type: map_at_100 value: 89.11 - type: map_at_1000 value: 89.12400000000001 - type: map_at_3 value: 88.103 - type: map_at_5 value: 88.629 - type: mrr_at_1 value: 89.934 - type: mrr_at_10 value: 93.91000000000001 - type: mrr_at_100 value: 93.937 - type: mrr_at_1000 value: 93.938 - type: mrr_at_3 value: 93.62700000000001 - type: mrr_at_5 value: 93.84599999999999 - type: ndcg_at_1 value: 89.934 - type: ndcg_at_10 value: 91.574 - type: ndcg_at_100 value: 92.238 - type: ndcg_at_1000 value: 92.45 - type: ndcg_at_3 value: 90.586 - type: ndcg_at_5 value: 91.16300000000001 - type: precision_at_1 value: 89.934 - type: precision_at_10 value: 10.555 - type: precision_at_100 value: 1.1159999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 33.588 - type: precision_at_5 value: 20.642 - type: recall_at_1 value: 83.479 - type: recall_at_10 
value: 94.971 - type: recall_at_100 value: 97.397 - type: recall_at_1000 value: 98.666 - type: recall_at_3 value: 92.24799999999999 - type: recall_at_5 value: 93.797 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 revision: 27a168819829fe9bcd655c2df245fb19452e8e06 split: test type: mteb/fiqa metrics: - type: map_at_1 value: 27.16 - type: map_at_10 value: 45.593 - type: map_at_100 value: 47.762 - type: map_at_1000 value: 47.899 - type: map_at_3 value: 39.237 - type: map_at_5 value: 42.970000000000006 - type: mrr_at_1 value: 52.623 - type: mrr_at_10 value: 62.637 - type: mrr_at_100 value: 63.169 - type: mrr_at_1000 value: 63.185 - type: mrr_at_3 value: 59.928000000000004 - type: mrr_at_5 value: 61.702999999999996 - type: ndcg_at_1 value: 52.623 - type: ndcg_at_10 value: 54.701 - type: ndcg_at_100 value: 61.263 - type: ndcg_at_1000 value: 63.134 - type: ndcg_at_3 value: 49.265 - type: ndcg_at_5 value: 51.665000000000006 - type: precision_at_1 value: 52.623 - type: precision_at_10 value: 15.185 - type: precision_at_100 value: 2.202 - type: precision_at_1000 value: 0.254 - type: precision_at_3 value: 32.767 - type: precision_at_5 value: 24.722 - type: recall_at_1 value: 27.16 - type: recall_at_10 value: 63.309000000000005 - type: recall_at_100 value: 86.722 - type: recall_at_1000 value: 97.505 - type: recall_at_3 value: 45.045 - type: recall_at_5 value: 54.02400000000001 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA revision: ab518f4d6fcca38d87c25209f94beba119d02014 split: test type: mteb/hotpotqa metrics: - type: map_at_1 value: 42.573 - type: map_at_10 value: 59.373 - type: map_at_100 value: 60.292 - type: map_at_1000 value: 60.358999999999995 - type: map_at_3 value: 56.159000000000006 - type: map_at_5 value: 58.123999999999995 - type: mrr_at_1 value: 85.14500000000001 - type: mrr_at_10 value: 89.25999999999999 - type: mrr_at_100 value: 89.373 - type: mrr_at_1000 value: 89.377 - type: mrr_at_3 value: 88.618 - type: mrr_at_5 value: 89.036 - type: ndcg_at_1 value: 85.14500000000001 - type: ndcg_at_10 value: 68.95 - type: ndcg_at_100 value: 71.95 - type: ndcg_at_1000 value: 73.232 - type: ndcg_at_3 value: 64.546 - type: ndcg_at_5 value: 66.945 - type: precision_at_1 value: 85.14500000000001 - type: precision_at_10 value: 13.865 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 39.703 - type: precision_at_5 value: 25.718000000000004 - type: recall_at_1 value: 42.573 - type: recall_at_10 value: 69.325 - type: recall_at_100 value: 80.932 - type: recall_at_1000 value: 89.446 - type: recall_at_3 value: 59.553999999999995 - type: recall_at_5 value: 64.294 task: type: Retrieval - dataset: config: default name: MTEB ImdbClassification revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 split: test type: mteb/imdb metrics: - type: accuracy value: 95.8336 - type: ap value: 93.78862962194073 - type: f1 value: 95.83192650728371 task: type: Classification - dataset: config: default name: MTEB MSMARCO revision: c5a29a104738b98a9e76336939199e264163d4a0 split: dev type: mteb/msmarco metrics: - type: map_at_1 value: 23.075000000000003 - type: map_at_10 value: 36.102000000000004 - type: map_at_100 value: 37.257 - type: map_at_1000 value: 37.3 - type: map_at_3 value: 32.144 - type: map_at_5 value: 34.359 - type: mrr_at_1 value: 23.711 - type: mrr_at_10 value: 36.671 - type: mrr_at_100 value: 37.763999999999996 - type: mrr_at_1000 value: 37.801 - type: mrr_at_3 value: 32.775 - type: mrr_at_5 value: 34.977000000000004 
- type: ndcg_at_1 value: 23.711 - type: ndcg_at_10 value: 43.361 - type: ndcg_at_100 value: 48.839 - type: ndcg_at_1000 value: 49.88 - type: ndcg_at_3 value: 35.269 - type: ndcg_at_5 value: 39.224 - type: precision_at_1 value: 23.711 - type: precision_at_10 value: 6.866999999999999 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.083 - type: recall_at_1 value: 23.075000000000003 - type: recall_at_10 value: 65.756 - type: recall_at_100 value: 90.88199999999999 - type: recall_at_1000 value: 98.739 - type: recall_at_3 value: 43.691 - type: recall_at_5 value: 53.15800000000001 task: type: Retrieval - dataset: config: en name: MTEB MTOPDomainClassification (en) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 97.69493844049248 - type: f1 value: 97.55048089616261 task: type: Classification - dataset: config: en name: MTEB MTOPIntentClassification (en) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 88.75968992248062 - type: f1 value: 72.26321223399123 task: type: Classification - dataset: config: en name: MTEB MassiveIntentClassification (en) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 82.40080699394754 - type: f1 value: 79.62590029057968 task: type: Classification - dataset: config: en name: MTEB MassiveScenarioClassification (en) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 84.49562878278414 - type: f1 value: 84.0040193313333 task: type: Classification - dataset: config: default name: MTEB MedrxivClusteringP2P revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 split: test type: mteb/medrxiv-clustering-p2p metrics: - type: v_measure value: 39.386760057101945 task: type: Clustering - dataset: config: default name: MTEB MedrxivClusteringS2S revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 split: test type: mteb/medrxiv-clustering-s2s metrics: - type: v_measure value: 37.89687154075537 task: type: Clustering - dataset: config: default name: MTEB MindSmallReranking revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 split: test type: mteb/mind_small metrics: - type: map value: 33.94151656057482 - type: mrr value: 35.32684700746953 task: type: Reranking - dataset: config: default name: MTEB NFCorpus revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 split: test type: mteb/nfcorpus metrics: - type: map_at_1 value: 6.239999999999999 - type: map_at_10 value: 14.862 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.694000000000003 - type: map_at_3 value: 10.683 - type: map_at_5 value: 12.674 - type: mrr_at_1 value: 50.15500000000001 - type: mrr_at_10 value: 59.697 - type: mrr_at_100 value: 60.095 - type: mrr_at_1000 value: 60.129999999999995 - type: mrr_at_3 value: 58.35900000000001 - type: mrr_at_5 value: 58.839 - type: ndcg_at_1 value: 48.452 - type: ndcg_at_10 value: 39.341 - type: ndcg_at_100 value: 35.866 - type: ndcg_at_1000 value: 45.111000000000004 - type: ndcg_at_3 value: 44.527 - type: ndcg_at_5 value: 42.946 - type: precision_at_1 value: 50.15500000000001 - type: precision_at_10 value: 29.536 - type: precision_at_100 value: 9.142 - type: precision_at_1000 value: 2.2849999999999997 - type: precision_at_3 value: 41.899 - type: precision_at_5 value: 37.647000000000006 
- type: recall_at_1 value: 6.239999999999999 - type: recall_at_10 value: 19.278000000000002 - type: recall_at_100 value: 36.074 - type: recall_at_1000 value: 70.017 - type: recall_at_3 value: 12.066 - type: recall_at_5 value: 15.254000000000001 task: type: Retrieval - dataset: config: default name: MTEB NQ revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 split: test type: mteb/nq metrics: - type: map_at_1 value: 39.75 - type: map_at_10 value: 56.443 - type: map_at_100 value: 57.233999999999995 - type: map_at_1000 value: 57.249 - type: map_at_3 value: 52.032999999999994 - type: map_at_5 value: 54.937999999999995 - type: mrr_at_1 value: 44.728 - type: mrr_at_10 value: 58.939 - type: mrr_at_100 value: 59.489000000000004 - type: mrr_at_1000 value: 59.499 - type: mrr_at_3 value: 55.711999999999996 - type: mrr_at_5 value: 57.89 - type: ndcg_at_1 value: 44.728 - type: ndcg_at_10 value: 63.998999999999995 - type: ndcg_at_100 value: 67.077 - type: ndcg_at_1000 value: 67.40899999999999 - type: ndcg_at_3 value: 56.266000000000005 - type: ndcg_at_5 value: 60.88 - type: precision_at_1 value: 44.728 - type: precision_at_10 value: 10.09 - type: precision_at_100 value: 1.1809999999999998 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.145 - type: precision_at_5 value: 17.822 - type: recall_at_1 value: 39.75 - type: recall_at_10 value: 84.234 - type: recall_at_100 value: 97.055 - type: recall_at_1000 value: 99.517 - type: recall_at_3 value: 64.851 - type: recall_at_5 value: 75.343 task: type: Retrieval - dataset: config: default name: MTEB QuoraRetrieval revision: None split: test type: mteb/quora metrics: - type: map_at_1 value: 72.085 - type: map_at_10 value: 86.107 - type: map_at_100 value: 86.727 - type: map_at_1000 value: 86.74 - type: map_at_3 value: 83.21 - type: map_at_5 value: 85.06 - type: mrr_at_1 value: 82.94 - type: mrr_at_10 value: 88.845 - type: mrr_at_100 value: 88.926 - type: mrr_at_1000 value: 88.927 - type: mrr_at_3 value: 87.993 - type: mrr_at_5 value: 88.62299999999999 - type: ndcg_at_1 value: 82.97 - type: ndcg_at_10 value: 89.645 - type: ndcg_at_100 value: 90.717 - type: ndcg_at_1000 value: 90.78 - type: ndcg_at_3 value: 86.99900000000001 - type: ndcg_at_5 value: 88.52600000000001 - type: precision_at_1 value: 82.97 - type: precision_at_10 value: 13.569 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.043 - type: precision_at_5 value: 24.992 - type: recall_at_1 value: 72.085 - type: recall_at_10 value: 96.262 - type: recall_at_100 value: 99.77000000000001 - type: recall_at_1000 value: 99.997 - type: recall_at_3 value: 88.652 - type: recall_at_5 value: 93.01899999999999 task: type: Retrieval - dataset: config: default name: MTEB RedditClustering revision: 24640382cdbf8abc73003fb0fa6d111a705499eb split: test type: mteb/reddit-clustering metrics: - type: v_measure value: 55.82153952668092 task: type: Clustering - dataset: config: default name: MTEB RedditClusteringP2P revision: 282350215ef01743dc01b456c7f5241fa8937f16 split: test type: mteb/reddit-clustering-p2p metrics: - type: v_measure value: 62.094465801879295 task: type: Clustering - dataset: config: default name: MTEB SCIDOCS revision: None split: test type: mteb/scidocs metrics: - type: map_at_1 value: 5.688 - type: map_at_10 value: 15.201999999999998 - type: map_at_100 value: 18.096 - type: map_at_1000 value: 18.481 - type: map_at_3 value: 10.734 - type: map_at_5 value: 12.94 - type: mrr_at_1 value: 28.000000000000004 - type: mrr_at_10 value: 
41.101 - type: mrr_at_100 value: 42.202 - type: mrr_at_1000 value: 42.228 - type: mrr_at_3 value: 37.683 - type: mrr_at_5 value: 39.708 - type: ndcg_at_1 value: 28.000000000000004 - type: ndcg_at_10 value: 24.976000000000003 - type: ndcg_at_100 value: 35.129 - type: ndcg_at_1000 value: 40.77 - type: ndcg_at_3 value: 23.787 - type: ndcg_at_5 value: 20.816000000000003 - type: precision_at_1 value: 28.000000000000004 - type: precision_at_10 value: 13.04 - type: precision_at_100 value: 2.761 - type: precision_at_1000 value: 0.41000000000000003 - type: precision_at_3 value: 22.6 - type: precision_at_5 value: 18.52 - type: recall_at_1 value: 5.688 - type: recall_at_10 value: 26.43 - type: recall_at_100 value: 56.02 - type: recall_at_1000 value: 83.21 - type: recall_at_3 value: 13.752 - type: recall_at_5 value: 18.777 task: type: Retrieval - dataset: config: default name: MTEB SICK-R revision: a6ea5a8cab320b040a23452cc28066d9beae2cee split: test type: mteb/sickr-sts metrics: - type: cos_sim_pearson value: 85.15084859283178 - type: cos_sim_spearman value: 80.49030614009419 - type: euclidean_pearson value: 81.84574978672468 - type: euclidean_spearman value: 79.89787150656818 - type: manhattan_pearson value: 81.63076538567131 - type: manhattan_spearman value: 79.69867352121841 task: type: STS - dataset: config: default name: MTEB STS12 revision: a0d554a64d88156834ff5ae9920b964011b16384 split: test type: mteb/sts12-sts metrics: - type: cos_sim_pearson value: 84.64097921490992 - type: cos_sim_spearman value: 77.25370084896514 - type: euclidean_pearson value: 82.71210826468788 - type: euclidean_spearman value: 78.50445584994826 - type: manhattan_pearson value: 82.92580164330298 - type: manhattan_spearman value: 78.69686891301019 task: type: STS - dataset: config: default name: MTEB STS13 revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca split: test type: mteb/sts13-sts metrics: - type: cos_sim_pearson value: 87.24596417308994 - type: cos_sim_spearman value: 87.79454220555091 - type: euclidean_pearson value: 87.40242561671164 - type: euclidean_spearman value: 88.25955597373556 - type: manhattan_pearson value: 87.25160240485849 - type: manhattan_spearman value: 88.155794979818 task: type: STS - dataset: config: default name: MTEB STS14 revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 split: test type: mteb/sts14-sts metrics: - type: cos_sim_pearson value: 84.44914233422564 - type: cos_sim_spearman value: 82.91015471820322 - type: euclidean_pearson value: 84.7206656630327 - type: euclidean_spearman value: 83.86408872059216 - type: manhattan_pearson value: 84.72816725158454 - type: manhattan_spearman value: 84.01603388572788 task: type: STS - dataset: config: default name: MTEB STS15 revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 split: test type: mteb/sts15-sts metrics: - type: cos_sim_pearson value: 87.6168026237477 - type: cos_sim_spearman value: 88.45414278092397 - type: euclidean_pearson value: 88.57023240882022 - type: euclidean_spearman value: 89.04102190922094 - type: manhattan_pearson value: 88.66695535796354 - type: manhattan_spearman value: 89.19898476680969 task: type: STS - dataset: config: default name: MTEB STS16 revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 split: test type: mteb/sts16-sts metrics: - type: cos_sim_pearson value: 84.27925826089424 - type: cos_sim_spearman value: 85.45291099550461 - type: euclidean_pearson value: 83.63853036580834 - type: euclidean_spearman value: 84.33468035821484 - type: manhattan_pearson value: 83.72778773251596 - type: manhattan_spearman 
value: 84.51583132445376 task: type: STS - dataset: config: en-en name: MTEB STS17 (en-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 89.67375185692552 - type: cos_sim_spearman value: 90.32542469203855 - type: euclidean_pearson value: 89.63513717951847 - type: euclidean_spearman value: 89.87760271003745 - type: manhattan_pearson value: 89.28381452982924 - type: manhattan_spearman value: 89.53568197785721 task: type: STS - dataset: config: en name: MTEB STS22 (en) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 66.24644693819846 - type: cos_sim_spearman value: 66.09889420525377 - type: euclidean_pearson value: 63.72551583520747 - type: euclidean_spearman value: 63.01385470780679 - type: manhattan_pearson value: 64.09258157214097 - type: manhattan_spearman value: 63.080517752822594 task: type: STS - dataset: config: default name: MTEB STSBenchmark revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cos_sim_pearson value: 86.27321463839989 - type: cos_sim_spearman value: 86.37572865993327 - type: euclidean_pearson value: 86.36268020198149 - type: euclidean_spearman value: 86.31089339478922 - type: manhattan_pearson value: 86.4260445761947 - type: manhattan_spearman value: 86.45885895320457 task: type: STS - dataset: config: default name: MTEB SciDocsRR revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab split: test type: mteb/scidocs-reranking metrics: - type: map value: 86.52456702387798 - type: mrr value: 96.34556529164372 task: type: Reranking - dataset: config: default name: MTEB SciFact revision: 0228b52cf27578f30900b9e5271d331663a030d7 split: test type: mteb/scifact metrics: - type: map_at_1 value: 61.99400000000001 - type: map_at_10 value: 73.38799999999999 - type: map_at_100 value: 73.747 - type: map_at_1000 value: 73.75 - type: map_at_3 value: 70.04599999999999 - type: map_at_5 value: 72.095 - type: mrr_at_1 value: 65.0 - type: mrr_at_10 value: 74.42800000000001 - type: mrr_at_100 value: 74.722 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.056 - type: mrr_at_5 value: 73.60600000000001 - type: ndcg_at_1 value: 65.0 - type: ndcg_at_10 value: 78.435 - type: ndcg_at_100 value: 79.922 - type: ndcg_at_1000 value: 80.00500000000001 - type: ndcg_at_3 value: 73.05199999999999 - type: ndcg_at_5 value: 75.98 - type: precision_at_1 value: 65.0 - type: precision_at_10 value: 10.5 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.555999999999997 - type: precision_at_5 value: 19.0 - type: recall_at_1 value: 61.99400000000001 - type: recall_at_10 value: 92.72200000000001 - type: recall_at_100 value: 99.333 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.739 - type: recall_at_5 value: 85.828 task: type: Retrieval - dataset: config: default name: MTEB SprintDuplicateQuestions revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 split: test type: mteb/sprintduplicatequestions-pairclassification metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 95.3203137438653 - type: cos_sim_f1 value: 89.12386706948641 - type: cos_sim_precision value: 89.75659229208925 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.67821782178218 - type: dot_ap value: 89.94069840000675 - type: dot_f1 value: 83.45902463549521 - type: 
dot_precision value: 83.9231547017189 - type: dot_recall value: 83.0 - type: euclidean_accuracy value: 99.78613861386138 - type: euclidean_ap value: 95.10648259135526 - type: euclidean_f1 value: 88.77338877338877 - type: euclidean_precision value: 92.42424242424242 - type: euclidean_recall value: 85.39999999999999 - type: manhattan_accuracy value: 99.7950495049505 - type: manhattan_ap value: 95.29987661320946 - type: manhattan_f1 value: 89.21313183949972 - type: manhattan_precision value: 93.14472252448314 - type: manhattan_recall value: 85.6 - type: max_accuracy value: 99.7950495049505 - type: max_ap value: 95.3203137438653 - type: max_f1 value: 89.21313183949972 task: type: PairClassification - dataset: config: default name: MTEB StackExchangeClustering revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 split: test type: mteb/stackexchange-clustering metrics: - type: v_measure value: 67.65446577183913 task: type: Clustering - dataset: config: default name: MTEB StackExchangeClusteringP2P revision: 815ca46b2622cec33ccafc3735d572c266efdb44 split: test type: mteb/stackexchange-clustering-p2p metrics: - type: v_measure value: 46.30749237193961 task: type: Clustering - dataset: config: default name: MTEB StackOverflowDupQuestions revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 split: test type: mteb/stackoverflowdupquestions-reranking metrics: - type: map value: 54.91481849959949 - type: mrr value: 55.853506175197346 task: type: Reranking - dataset: config: default name: MTEB SummEval revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c split: test type: mteb/summeval metrics: - type: cos_sim_pearson value: 30.08196549170419 - type: cos_sim_spearman value: 31.16661390597077 - type: dot_pearson value: 29.892258410943466 - type: dot_spearman value: 30.51328811965085 task: type: Summarization - dataset: config: default name: MTEB TRECCOVID revision: None split: test type: mteb/trec-covid metrics: - type: map_at_1 value: 0.23900000000000002 - type: map_at_10 value: 2.173 - type: map_at_100 value: 14.24 - type: map_at_1000 value: 35.309000000000005 - type: map_at_3 value: 0.7100000000000001 - type: map_at_5 value: 1.163 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: ndcg_at_1 value: 90.0 - type: ndcg_at_10 value: 85.382 - type: ndcg_at_100 value: 68.03 - type: ndcg_at_1000 value: 61.021 - type: ndcg_at_3 value: 89.765 - type: ndcg_at_5 value: 88.444 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 88.0 - type: precision_at_100 value: 70.02000000000001 - type: precision_at_1000 value: 26.984 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 92.80000000000001 - type: recall_at_1 value: 0.23900000000000002 - type: recall_at_10 value: 2.313 - type: recall_at_100 value: 17.049 - type: recall_at_1000 value: 57.489999999999995 - type: recall_at_3 value: 0.737 - type: recall_at_5 value: 1.221 task: type: Retrieval - dataset: config: default name: MTEB Touche2020 revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f split: test type: mteb/touche2020 metrics: - type: map_at_1 value: 2.75 - type: map_at_10 value: 11.29 - type: map_at_100 value: 18.032999999999998 - type: map_at_1000 value: 19.746 - type: map_at_3 value: 6.555 - type: map_at_5 value: 8.706999999999999 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 50.55 - type: mrr_at_100 value: 51.659 - type: mrr_at_1000 value: 51.659 - type: mrr_at_3 value: 47.278999999999996 - 
type: mrr_at_5 value: 49.728 - type: ndcg_at_1 value: 32.653 - type: ndcg_at_10 value: 27.894000000000002 - type: ndcg_at_100 value: 39.769 - type: ndcg_at_1000 value: 51.495999999999995 - type: ndcg_at_3 value: 32.954 - type: ndcg_at_5 value: 31.502999999999997 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 23.265 - type: precision_at_100 value: 7.898 - type: precision_at_1000 value: 1.58 - type: precision_at_3 value: 34.694 - type: precision_at_5 value: 31.429000000000002 - type: recall_at_1 value: 2.75 - type: recall_at_10 value: 16.953 - type: recall_at_100 value: 48.68 - type: recall_at_1000 value: 85.18599999999999 - type: recall_at_3 value: 7.710999999999999 - type: recall_at_5 value: 11.484 task: type: Retrieval - dataset: config: default name: MTEB ToxicConversationsClassification revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c split: test type: mteb/toxic_conversations_50k metrics: - type: accuracy value: 82.66099999999999 - type: ap value: 25.555698090238337 - type: f1 value: 66.48402012461622 task: type: Classification - dataset: config: default name: MTEB TweetSentimentExtractionClassification revision: d604517c81ca91fe16a244d1248fc021f9ecee7a split: test type: mteb/tweet_sentiment_extraction metrics: - type: accuracy value: 72.94567062818335 - type: f1 value: 73.28139189595674 task: type: Classification - dataset: config: default name: MTEB TwentyNewsgroupsClustering revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 split: test type: mteb/twentynewsgroups-clustering metrics: - type: v_measure value: 49.581627240203474 task: type: Clustering - dataset: config: default name: MTEB TwitterSemEval2015 revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 split: test type: mteb/twittersemeval2015-pairclassification metrics: - type: cos_sim_accuracy value: 87.78089050485785 - type: cos_sim_ap value: 79.64487116574168 - type: cos_sim_f1 value: 72.46563021970964 - type: cos_sim_precision value: 70.62359128474831 - type: cos_sim_recall value: 74.40633245382587 - type: dot_accuracy value: 86.2609524944865 - type: dot_ap value: 75.513046857613 - type: dot_f1 value: 68.58213616489695 - type: dot_precision value: 65.12455516014235 - type: dot_recall value: 72.42744063324538 - type: euclidean_accuracy value: 87.6080348095607 - type: euclidean_ap value: 79.00204933649795 - type: euclidean_f1 value: 72.14495342605589 - type: euclidean_precision value: 69.85421299728193 - type: euclidean_recall value: 74.5910290237467 - type: manhattan_accuracy value: 87.59611372712642 - type: manhattan_ap value: 78.78523756706264 - type: manhattan_f1 value: 71.86499137718648 - type: manhattan_precision value: 67.39833641404806 - type: manhattan_recall value: 76.96569920844327 - type: max_accuracy value: 87.78089050485785 - type: max_ap value: 79.64487116574168 - type: max_f1 value: 72.46563021970964 task: type: PairClassification - dataset: config: default name: MTEB TwitterURLCorpus revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf split: test type: mteb/twitterurlcorpus-pairclassification metrics: - type: cos_sim_accuracy value: 89.98719292117825 - type: cos_sim_ap value: 87.58146137353202 - type: cos_sim_f1 value: 80.28543232369239 - type: cos_sim_precision value: 79.1735289714029 - type: cos_sim_recall value: 81.42901139513397 - type: dot_accuracy value: 88.9199363526992 - type: dot_ap value: 84.98499998630417 - type: dot_f1 value: 78.21951400757969 - type: dot_precision value: 75.58523624874336 - type: dot_recall value: 81.04404065291038 - type: euclidean_accuracy value: 
89.77374160748244 - type: euclidean_ap value: 87.35151562835209 - type: euclidean_f1 value: 79.92160922940393 - type: euclidean_precision value: 76.88531587933979 - type: euclidean_recall value: 83.20757622420696 - type: manhattan_accuracy value: 89.72717041176699 - type: manhattan_ap value: 87.34065592142515 - type: manhattan_f1 value: 79.85603419187943 - type: manhattan_precision value: 77.82243332115455 - type: manhattan_recall value: 81.99876809362489 - type: max_accuracy value: 89.98719292117825 - type: max_ap value: 87.58146137353202 - type: max_f1 value: 80.28543232369239 task: type: PairClassification - dataset: config: default name: MTEB AFQMC revision: b44c3b011063adb25877c13823db83bb193913c4 split: validation type: C-MTEB/AFQMC metrics: - type: cos_sim_pearson value: 53.45954203592337 - type: cos_sim_spearman value: 58.42154680418638 - type: euclidean_pearson value: 56.41543791722753 - type: euclidean_spearman value: 58.39328016640146 - type: manhattan_pearson value: 56.318510356833876 - type: manhattan_spearman value: 58.28423447818184 task: type: STS - dataset: config: default name: MTEB ATEC revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 split: test type: C-MTEB/ATEC metrics: - type: cos_sim_pearson value: 50.78356460675945 - type: cos_sim_spearman value: 55.6530411663269 - type: euclidean_pearson value: 56.50763660417816 - type: euclidean_spearman value: 55.733823335669065 - type: manhattan_pearson value: 56.45323093512866 - type: manhattan_spearman value: 55.63248619032702 task: type: STS - dataset: config: zh name: MTEB AmazonReviewsClassification (zh) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 47.209999999999994 - type: f1 value: 46.08892432018655 task: type: Classification - dataset: config: default name: MTEB BQ revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 split: test type: C-MTEB/BQ metrics: - type: cos_sim_pearson value: 70.25573992001478 - type: cos_sim_spearman value: 73.85247134951433 - type: euclidean_pearson value: 72.60033082168442 - type: euclidean_spearman value: 73.72445893756499 - type: manhattan_pearson value: 72.59932284620231 - type: manhattan_spearman value: 73.68002490614583 task: type: STS - dataset: config: default name: MTEB CLSClusteringP2P revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 split: test type: C-MTEB/CLSClusteringP2P metrics: - type: v_measure value: 45.21317724305628 task: type: Clustering - dataset: config: default name: MTEB CLSClusteringS2S revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f split: test type: C-MTEB/CLSClusteringS2S metrics: - type: v_measure value: 42.49825170976724 task: type: Clustering - dataset: config: default name: MTEB CMedQAv1 revision: 8d7f1e942507dac42dc58017c1a001c3717da7df split: test type: C-MTEB/CMedQAv1-reranking metrics: - type: map value: 88.15661686810597 - type: mrr value: 90.11222222222223 task: type: Reranking - dataset: config: default name: MTEB CMedQAv2 revision: 23d186750531a14a0357ca22cd92d712fd512ea0 split: test type: C-MTEB/CMedQAv2-reranking metrics: - type: map value: 88.1204726064383 - type: mrr value: 90.20142857142858 task: type: Reranking - dataset: config: default name: MTEB CmedqaRetrieval revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 split: dev type: C-MTEB/CmedqaRetrieval metrics: - type: map_at_1 value: 27.224999999999998 - type: map_at_10 value: 40.169 - type: map_at_100 value: 42.0 - type: map_at_1000 value: 42.109 - type: map_at_3 value: 35.76 - type: map_at_5 value: 38.221 
- type: mrr_at_1 value: 40.56 - type: mrr_at_10 value: 49.118 - type: mrr_at_100 value: 50.092999999999996 - type: mrr_at_1000 value: 50.133 - type: mrr_at_3 value: 46.507 - type: mrr_at_5 value: 47.973 - type: ndcg_at_1 value: 40.56 - type: ndcg_at_10 value: 46.972 - type: ndcg_at_100 value: 54.04 - type: ndcg_at_1000 value: 55.862 - type: ndcg_at_3 value: 41.36 - type: ndcg_at_5 value: 43.704 - type: precision_at_1 value: 40.56 - type: precision_at_10 value: 10.302999999999999 - type: precision_at_100 value: 1.606 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 23.064 - type: precision_at_5 value: 16.764000000000003 - type: recall_at_1 value: 27.224999999999998 - type: recall_at_10 value: 58.05200000000001 - type: recall_at_100 value: 87.092 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 41.373 - type: recall_at_5 value: 48.453 task: type: Retrieval - dataset: config: default name: MTEB Cmnli revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 split: validation type: C-MTEB/CMNLI metrics: - type: cos_sim_accuracy value: 77.40228502705953 - type: cos_sim_ap value: 86.22359172956327 - type: cos_sim_f1 value: 78.96328293736501 - type: cos_sim_precision value: 73.36945615091311 - type: cos_sim_recall value: 85.48047696983868 - type: dot_accuracy value: 75.53818400481059 - type: dot_ap value: 83.70164011305312 - type: dot_f1 value: 77.67298719348754 - type: dot_precision value: 67.49482401656314 - type: dot_recall value: 91.46598082768296 - type: euclidean_accuracy value: 77.94347564642213 - type: euclidean_ap value: 86.4652108728609 - type: euclidean_f1 value: 79.15555555555555 - type: euclidean_precision value: 75.41816641964853 - type: euclidean_recall value: 83.28267477203647 - type: manhattan_accuracy value: 77.45039085989175 - type: manhattan_ap value: 86.09986583900665 - type: manhattan_f1 value: 78.93669264438988 - type: manhattan_precision value: 72.63261296660117 - type: manhattan_recall value: 86.43909282207154 - type: max_accuracy value: 77.94347564642213 - type: max_ap value: 86.4652108728609 - type: max_f1 value: 79.15555555555555 task: type: PairClassification - dataset: config: default name: MTEB CovidRetrieval revision: 1271c7809071a13532e05f25fb53511ffce77117 split: dev type: C-MTEB/CovidRetrieval metrics: - type: map_at_1 value: 69.336 - type: map_at_10 value: 77.16 - type: map_at_100 value: 77.47500000000001 - type: map_at_1000 value: 77.482 - type: map_at_3 value: 75.42999999999999 - type: map_at_5 value: 76.468 - type: mrr_at_1 value: 69.44200000000001 - type: mrr_at_10 value: 77.132 - type: mrr_at_100 value: 77.43299999999999 - type: mrr_at_1000 value: 77.44 - type: mrr_at_3 value: 75.395 - type: mrr_at_5 value: 76.459 - type: ndcg_at_1 value: 69.547 - type: ndcg_at_10 value: 80.794 - type: ndcg_at_100 value: 82.245 - type: ndcg_at_1000 value: 82.40899999999999 - type: ndcg_at_3 value: 77.303 - type: ndcg_at_5 value: 79.168 - type: precision_at_1 value: 69.547 - type: precision_at_10 value: 9.305 - type: precision_at_100 value: 0.9979999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 27.749000000000002 - type: precision_at_5 value: 17.576 - type: recall_at_1 value: 69.336 - type: recall_at_10 value: 92.097 - type: recall_at_100 value: 98.736 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 82.64 - type: recall_at_5 value: 87.144 task: type: Retrieval - dataset: config: default name: MTEB DuRetrieval revision: a1a333e290fe30b10f3f56498e3a0d911a693ced split: dev type: C-MTEB/DuRetrieval 
metrics: - type: map_at_1 value: 26.817999999999998 - type: map_at_10 value: 82.67 - type: map_at_100 value: 85.304 - type: map_at_1000 value: 85.334 - type: map_at_3 value: 57.336 - type: map_at_5 value: 72.474 - type: mrr_at_1 value: 91.45 - type: mrr_at_10 value: 94.272 - type: mrr_at_100 value: 94.318 - type: mrr_at_1000 value: 94.32000000000001 - type: mrr_at_3 value: 94.0 - type: mrr_at_5 value: 94.17699999999999 - type: ndcg_at_1 value: 91.45 - type: ndcg_at_10 value: 89.404 - type: ndcg_at_100 value: 91.724 - type: ndcg_at_1000 value: 91.973 - type: ndcg_at_3 value: 88.104 - type: ndcg_at_5 value: 87.25699999999999 - type: precision_at_1 value: 91.45 - type: precision_at_10 value: 42.585 - type: precision_at_100 value: 4.838 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 78.8 - type: precision_at_5 value: 66.66 - type: recall_at_1 value: 26.817999999999998 - type: recall_at_10 value: 90.67 - type: recall_at_100 value: 98.36200000000001 - type: recall_at_1000 value: 99.583 - type: recall_at_3 value: 59.614999999999995 - type: recall_at_5 value: 77.05199999999999 task: type: Retrieval - dataset: config: default name: MTEB EcomRetrieval revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 split: dev type: C-MTEB/EcomRetrieval metrics: - type: map_at_1 value: 47.699999999999996 - type: map_at_10 value: 57.589999999999996 - type: map_at_100 value: 58.226 - type: map_at_1000 value: 58.251 - type: map_at_3 value: 55.233 - type: map_at_5 value: 56.633 - type: mrr_at_1 value: 47.699999999999996 - type: mrr_at_10 value: 57.589999999999996 - type: mrr_at_100 value: 58.226 - type: mrr_at_1000 value: 58.251 - type: mrr_at_3 value: 55.233 - type: mrr_at_5 value: 56.633 - type: ndcg_at_1 value: 47.699999999999996 - type: ndcg_at_10 value: 62.505 - type: ndcg_at_100 value: 65.517 - type: ndcg_at_1000 value: 66.19800000000001 - type: ndcg_at_3 value: 57.643 - type: ndcg_at_5 value: 60.181 - type: precision_at_1 value: 47.699999999999996 - type: precision_at_10 value: 7.8 - type: precision_at_100 value: 0.919 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 21.532999999999998 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 47.699999999999996 - type: recall_at_10 value: 78.0 - type: recall_at_100 value: 91.9 - type: recall_at_1000 value: 97.3 - type: recall_at_3 value: 64.60000000000001 - type: recall_at_5 value: 70.8 task: type: Retrieval - dataset: config: default name: MTEB IFlyTek revision: 421605374b29664c5fc098418fe20ada9bd55f8a split: validation type: C-MTEB/IFlyTek-classification metrics: - type: accuracy value: 44.84801846864178 - type: f1 value: 37.47347897956339 task: type: Classification - dataset: config: default name: MTEB JDReview revision: b7c64bd89eb87f8ded463478346f76731f07bf8b split: test type: C-MTEB/JDReview-classification metrics: - type: accuracy value: 85.81613508442777 - type: ap value: 52.68244615477374 - type: f1 value: 80.0445640948843 task: type: Classification - dataset: config: default name: MTEB LCQMC revision: 17f9b096f80380fce5ed12a9be8be7784b337daf split: test type: C-MTEB/LCQMC metrics: - type: cos_sim_pearson value: 69.57786502217138 - type: cos_sim_spearman value: 75.39106054489906 - type: euclidean_pearson value: 73.72082954602402 - type: euclidean_spearman value: 75.14421475913619 - type: manhattan_pearson value: 73.62463076633642 - type: manhattan_spearman value: 75.01301565104112 task: type: STS - dataset: config: default name: MTEB MMarcoReranking revision: None split: dev type: C-MTEB/Mmarco-reranking 
metrics: - type: map value: 29.143797057999134 - type: mrr value: 28.08174603174603 task: type: Reranking - dataset: config: default name: MTEB MMarcoRetrieval revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 split: dev type: C-MTEB/MMarcoRetrieval metrics: - type: map_at_1 value: 70.492 - type: map_at_10 value: 79.501 - type: map_at_100 value: 79.728 - type: map_at_1000 value: 79.735 - type: map_at_3 value: 77.77 - type: map_at_5 value: 78.851 - type: mrr_at_1 value: 72.822 - type: mrr_at_10 value: 80.001 - type: mrr_at_100 value: 80.19 - type: mrr_at_1000 value: 80.197 - type: mrr_at_3 value: 78.484 - type: mrr_at_5 value: 79.42099999999999 - type: ndcg_at_1 value: 72.822 - type: ndcg_at_10 value: 83.013 - type: ndcg_at_100 value: 84.013 - type: ndcg_at_1000 value: 84.20400000000001 - type: ndcg_at_3 value: 79.728 - type: ndcg_at_5 value: 81.542 - type: precision_at_1 value: 72.822 - type: precision_at_10 value: 9.917 - type: precision_at_100 value: 1.042 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 29.847 - type: precision_at_5 value: 18.871 - type: recall_at_1 value: 70.492 - type: recall_at_10 value: 93.325 - type: recall_at_100 value: 97.822 - type: recall_at_1000 value: 99.319 - type: recall_at_3 value: 84.636 - type: recall_at_5 value: 88.93100000000001 task: type: Retrieval - dataset: config: zh-CN name: MTEB MassiveIntentClassification (zh-CN) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 76.88298587760592 - type: f1 value: 73.89001762017176 task: type: Classification - dataset: config: zh-CN name: MTEB MassiveScenarioClassification (zh-CN) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 80.76328177538669 - type: f1 value: 80.24718532423358 task: type: Classification - dataset: config: default name: MTEB MedicalRetrieval revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 split: dev type: C-MTEB/MedicalRetrieval metrics: - type: map_at_1 value: 49.6 - type: map_at_10 value: 55.620999999999995 - type: map_at_100 value: 56.204 - type: map_at_1000 value: 56.251 - type: map_at_3 value: 54.132999999999996 - type: map_at_5 value: 54.933 - type: mrr_at_1 value: 49.7 - type: mrr_at_10 value: 55.67100000000001 - type: mrr_at_100 value: 56.254000000000005 - type: mrr_at_1000 value: 56.301 - type: mrr_at_3 value: 54.18300000000001 - type: mrr_at_5 value: 54.983000000000004 - type: ndcg_at_1 value: 49.6 - type: ndcg_at_10 value: 58.645 - type: ndcg_at_100 value: 61.789 - type: ndcg_at_1000 value: 63.219 - type: ndcg_at_3 value: 55.567 - type: ndcg_at_5 value: 57.008 - type: precision_at_1 value: 49.6 - type: precision_at_10 value: 6.819999999999999 - type: precision_at_100 value: 0.836 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 19.900000000000002 - type: precision_at_5 value: 12.64 - type: recall_at_1 value: 49.6 - type: recall_at_10 value: 68.2 - type: recall_at_100 value: 83.6 - type: recall_at_1000 value: 95.3 - type: recall_at_3 value: 59.699999999999996 - type: recall_at_5 value: 63.2 task: type: Retrieval - dataset: config: default name: MTEB MultilingualSentiment revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a split: validation type: C-MTEB/MultilingualSentiment-classification metrics: - type: accuracy value: 74.45666666666666 - type: f1 value: 74.32582402190089 task: type: Classification - dataset: config: default name: MTEB Ocnli revision: 
66e76a618a34d6d565d5538088562851e6daa7ec split: validation type: C-MTEB/OCNLI metrics: - type: cos_sim_accuracy value: 80.67135896047645 - type: cos_sim_ap value: 87.60421240712051 - type: cos_sim_f1 value: 82.1304131408661 - type: cos_sim_precision value: 77.68361581920904 - type: cos_sim_recall value: 87.11721224920802 - type: dot_accuracy value: 79.04710341093666 - type: dot_ap value: 85.6370059719336 - type: dot_f1 value: 80.763723150358 - type: dot_precision value: 73.69337979094077 - type: dot_recall value: 89.33474128827878 - type: euclidean_accuracy value: 81.05035192203573 - type: euclidean_ap value: 87.7880240053663 - type: euclidean_f1 value: 82.50244379276637 - type: euclidean_precision value: 76.7970882620564 - type: euclidean_recall value: 89.1235480464625 - type: manhattan_accuracy value: 80.61721710882512 - type: manhattan_ap value: 87.43568120591175 - type: manhattan_f1 value: 81.89526184538653 - type: manhattan_precision value: 77.5992438563327 - type: manhattan_recall value: 86.6948257655755 - type: max_accuracy value: 81.05035192203573 - type: max_ap value: 87.7880240053663 - type: max_f1 value: 82.50244379276637 task: type: PairClassification - dataset: config: default name: MTEB OnlineShopping revision: e610f2ebd179a8fda30ae534c3878750a96db120 split: test type: C-MTEB/OnlineShopping-classification metrics: - type: accuracy value: 93.5 - type: ap value: 91.31357903446782 - type: f1 value: 93.48088994006616 task: type: Classification - dataset: config: default name: MTEB PAWSX revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 split: test type: C-MTEB/PAWSX metrics: - type: cos_sim_pearson value: 36.93293453538077 - type: cos_sim_spearman value: 42.45972506308574 - type: euclidean_pearson value: 42.34945133152159 - type: euclidean_spearman value: 42.331610303674644 - type: manhattan_pearson value: 42.31455070249498 - type: manhattan_spearman value: 42.19887982891834 task: type: STS - dataset: config: default name: MTEB QBQTC revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 split: test type: C-MTEB/QBQTC metrics: - type: cos_sim_pearson value: 33.683290790043785 - type: cos_sim_spearman value: 35.149171171202994 - type: euclidean_pearson value: 32.33806561267862 - type: euclidean_spearman value: 34.483576387347966 - type: manhattan_pearson value: 32.47629754599608 - type: manhattan_spearman value: 34.66434471867615 task: type: STS - dataset: config: zh name: MTEB STS22 (zh) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 66.46322760516104 - type: cos_sim_spearman value: 67.398478319726 - type: euclidean_pearson value: 64.7223480293625 - type: euclidean_spearman value: 66.83118568812951 - type: manhattan_pearson value: 64.88440039828305 - type: manhattan_spearman value: 66.80429458952257 task: type: STS - dataset: config: default name: MTEB STSB revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 split: test type: C-MTEB/STSB metrics: - type: cos_sim_pearson value: 79.08991383232105 - type: cos_sim_spearman value: 79.39715677296854 - type: euclidean_pearson value: 78.63201279320496 - type: euclidean_spearman value: 79.40262660785731 - type: manhattan_pearson value: 78.98138363146906 - type: manhattan_spearman value: 79.79968413014194 task: type: STS - dataset: config: default name: MTEB T2Reranking revision: 76631901a18387f85eaa53e5450019b87ad58ef9 split: dev type: C-MTEB/T2Reranking metrics: - type: map value: 67.43289278789972 - type: mrr value: 77.53012460908535 task: type: 
Reranking - dataset: config: default name: MTEB T2Retrieval revision: 8731a845f1bf500a4f111cf1070785c793d10e64 split: dev type: C-MTEB/T2Retrieval metrics: - type: map_at_1 value: 27.733999999999998 - type: map_at_10 value: 78.24799999999999 - type: map_at_100 value: 81.765 - type: map_at_1000 value: 81.824 - type: map_at_3 value: 54.92 - type: map_at_5 value: 67.61399999999999 - type: mrr_at_1 value: 90.527 - type: mrr_at_10 value: 92.843 - type: mrr_at_100 value: 92.927 - type: mrr_at_1000 value: 92.93 - type: mrr_at_3 value: 92.45100000000001 - type: mrr_at_5 value: 92.693 - type: ndcg_at_1 value: 90.527 - type: ndcg_at_10 value: 85.466 - type: ndcg_at_100 value: 88.846 - type: ndcg_at_1000 value: 89.415 - type: ndcg_at_3 value: 86.768 - type: ndcg_at_5 value: 85.46000000000001 - type: precision_at_1 value: 90.527 - type: precision_at_10 value: 42.488 - type: precision_at_100 value: 5.024 - type: precision_at_1000 value: 0.516 - type: precision_at_3 value: 75.907 - type: precision_at_5 value: 63.727000000000004 - type: recall_at_1 value: 27.733999999999998 - type: recall_at_10 value: 84.346 - type: recall_at_100 value: 95.536 - type: recall_at_1000 value: 98.42999999999999 - type: recall_at_3 value: 56.455 - type: recall_at_5 value: 70.755 task: type: Retrieval - dataset: config: default name: MTEB TNews revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 split: validation type: C-MTEB/TNews-classification metrics: - type: accuracy value: 49.952000000000005 - type: f1 value: 48.264617195258054 task: type: Classification - dataset: config: default name: MTEB ThuNewsClusteringP2P revision: 5798586b105c0434e4f0fe5e767abe619442cf93 split: test type: C-MTEB/ThuNewsClusteringP2P metrics: - type: v_measure value: 68.23769904483508 task: type: Clustering - dataset: config: default name: MTEB ThuNewsClusteringS2S revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d split: test type: C-MTEB/ThuNewsClusteringS2S metrics: - type: v_measure value: 62.50294403136556 task: type: Clustering - dataset: config: default name: MTEB VideoRetrieval revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 split: dev type: C-MTEB/VideoRetrieval metrics: - type: map_at_1 value: 54.0 - type: map_at_10 value: 63.668 - type: map_at_100 value: 64.217 - type: map_at_1000 value: 64.23100000000001 - type: map_at_3 value: 61.7 - type: map_at_5 value: 62.870000000000005 - type: mrr_at_1 value: 54.0 - type: mrr_at_10 value: 63.668 - type: mrr_at_100 value: 64.217 - type: mrr_at_1000 value: 64.23100000000001 - type: mrr_at_3 value: 61.7 - type: mrr_at_5 value: 62.870000000000005 - type: ndcg_at_1 value: 54.0 - type: ndcg_at_10 value: 68.11399999999999 - type: ndcg_at_100 value: 70.723 - type: ndcg_at_1000 value: 71.123 - type: ndcg_at_3 value: 64.074 - type: ndcg_at_5 value: 66.178 - type: precision_at_1 value: 54.0 - type: precision_at_10 value: 8.200000000000001 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 23.633000000000003 - type: precision_at_5 value: 15.2 - type: recall_at_1 value: 54.0 - type: recall_at_10 value: 82.0 - type: recall_at_100 value: 94.1 - type: recall_at_1000 value: 97.3 - type: recall_at_3 value: 70.89999999999999 - type: recall_at_5 value: 76.0 task: type: Retrieval - dataset: config: default name: MTEB Waimai revision: 339287def212450dcaa9df8c22bf93e9980c7023 split: test type: C-MTEB/waimai-classification metrics: - type: accuracy value: 86.63000000000001 - type: ap value: 69.99457882599567 - type: f1 value: 85.07735617998541 task: type: 
Classification - dataset: config: default name: MTEB 8TagsClustering revision: None split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 44.594104491193555 task: type: Clustering - dataset: config: default name: MTEB AllegroReviews revision: None split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 63.97614314115309 - type: f1 value: 52.15634261679283 task: type: Classification - dataset: config: default name: MTEB ArguAna-PL revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 split: test type: clarin-knext/arguana-pl metrics: - type: map_at_1 value: 32.646 - type: map_at_10 value: 47.963 - type: map_at_100 value: 48.789 - type: map_at_1000 value: 48.797000000000004 - type: map_at_3 value: 43.196 - type: map_at_5 value: 46.016 - type: mrr_at_1 value: 33.073 - type: mrr_at_10 value: 48.126000000000005 - type: mrr_at_100 value: 48.946 - type: mrr_at_1000 value: 48.953 - type: mrr_at_3 value: 43.374 - type: mrr_at_5 value: 46.147 - type: ndcg_at_1 value: 32.646 - type: ndcg_at_10 value: 56.481 - type: ndcg_at_100 value: 59.922 - type: ndcg_at_1000 value: 60.07 - type: ndcg_at_3 value: 46.675 - type: ndcg_at_5 value: 51.76500000000001 - type: precision_at_1 value: 32.646 - type: precision_at_10 value: 8.371 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.919 - type: precision_at_5 value: 13.825999999999999 - type: recall_at_1 value: 32.646 - type: recall_at_10 value: 83.71300000000001 - type: recall_at_100 value: 98.578 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 56.757000000000005 - type: recall_at_5 value: 69.132 task: type: Retrieval - dataset: config: default name: MTEB CBD revision: None split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 68.56 - type: ap value: 23.310493680488513 - type: f1 value: 58.85369533105693 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: None split: test type: PL-MTEB/cdsce-pairclassification metrics: - type: cos_sim_accuracy value: 88.5 - type: cos_sim_ap value: 72.42140924378361 - type: cos_sim_f1 value: 66.0919540229885 - type: cos_sim_precision value: 72.78481012658227 - type: cos_sim_recall value: 60.526315789473685 - type: dot_accuracy value: 88.5 - type: dot_ap value: 72.42140924378361 - type: dot_f1 value: 66.0919540229885 - type: dot_precision value: 72.78481012658227 - type: dot_recall value: 60.526315789473685 - type: euclidean_accuracy value: 88.5 - type: euclidean_ap value: 72.42140924378361 - type: euclidean_f1 value: 66.0919540229885 - type: euclidean_precision value: 72.78481012658227 - type: euclidean_recall value: 60.526315789473685 - type: manhattan_accuracy value: 88.5 - type: manhattan_ap value: 72.49745515311696 - type: manhattan_f1 value: 66.0968660968661 - type: manhattan_precision value: 72.04968944099379 - type: manhattan_recall value: 61.05263157894737 - type: max_accuracy value: 88.5 - type: max_ap value: 72.49745515311696 - type: max_f1 value: 66.0968660968661 task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: None split: test type: PL-MTEB/cdscr-sts metrics: - type: cos_sim_pearson value: 90.32269765590145 - type: cos_sim_spearman value: 89.73666311491672 - type: euclidean_pearson value: 88.2933868516544 - type: euclidean_spearman value: 89.73666311491672 - type: manhattan_pearson value: 88.33474590219448 - type: manhattan_spearman value: 89.8548364866583 task: type: STS - dataset: config: default name: MTEB DBPedia-PL 
revision: 76afe41d9af165cc40999fcaa92312b8b012064a split: test type: clarin-knext/dbpedia-pl metrics: - type: map_at_1 value: 7.632999999999999 - type: map_at_10 value: 16.426 - type: map_at_100 value: 22.651 - type: map_at_1000 value: 24.372 - type: map_at_3 value: 11.706 - type: map_at_5 value: 13.529 - type: mrr_at_1 value: 60.75000000000001 - type: mrr_at_10 value: 68.613 - type: mrr_at_100 value: 69.001 - type: mrr_at_1000 value: 69.021 - type: mrr_at_3 value: 67.0 - type: mrr_at_5 value: 67.925 - type: ndcg_at_1 value: 49.875 - type: ndcg_at_10 value: 36.978 - type: ndcg_at_100 value: 40.031 - type: ndcg_at_1000 value: 47.566 - type: ndcg_at_3 value: 41.148 - type: ndcg_at_5 value: 38.702 - type: precision_at_1 value: 60.75000000000001 - type: precision_at_10 value: 29.7 - type: precision_at_100 value: 9.278 - type: precision_at_1000 value: 2.099 - type: precision_at_3 value: 44.0 - type: precision_at_5 value: 37.6 - type: recall_at_1 value: 7.632999999999999 - type: recall_at_10 value: 22.040000000000003 - type: recall_at_100 value: 44.024 - type: recall_at_1000 value: 67.848 - type: recall_at_3 value: 13.093 - type: recall_at_5 value: 15.973 task: type: Retrieval - dataset: config: default name: MTEB FiQA-PL revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e split: test type: clarin-knext/fiqa-pl metrics: - type: map_at_1 value: 15.473 - type: map_at_10 value: 24.579 - type: map_at_100 value: 26.387 - type: map_at_1000 value: 26.57 - type: map_at_3 value: 21.278 - type: map_at_5 value: 23.179 - type: mrr_at_1 value: 30.709999999999997 - type: mrr_at_10 value: 38.994 - type: mrr_at_100 value: 39.993 - type: mrr_at_1000 value: 40.044999999999995 - type: mrr_at_3 value: 36.342999999999996 - type: mrr_at_5 value: 37.846999999999994 - type: ndcg_at_1 value: 30.709999999999997 - type: ndcg_at_10 value: 31.608999999999998 - type: ndcg_at_100 value: 38.807 - type: ndcg_at_1000 value: 42.208 - type: ndcg_at_3 value: 28.086 - type: ndcg_at_5 value: 29.323 - type: precision_at_1 value: 30.709999999999997 - type: precision_at_10 value: 8.688 - type: precision_at_100 value: 1.608 - type: precision_at_1000 value: 0.22100000000000003 - type: precision_at_3 value: 18.724 - type: precision_at_5 value: 13.950999999999999 - type: recall_at_1 value: 15.473 - type: recall_at_10 value: 38.361000000000004 - type: recall_at_100 value: 65.2 - type: recall_at_1000 value: 85.789 - type: recall_at_3 value: 25.401 - type: recall_at_5 value: 30.875999999999998 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA-PL revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 split: test type: clarin-knext/hotpotqa-pl metrics: - type: map_at_1 value: 38.096000000000004 - type: map_at_10 value: 51.44499999999999 - type: map_at_100 value: 52.325 - type: map_at_1000 value: 52.397000000000006 - type: map_at_3 value: 48.626999999999995 - type: map_at_5 value: 50.342 - type: mrr_at_1 value: 76.19200000000001 - type: mrr_at_10 value: 81.191 - type: mrr_at_100 value: 81.431 - type: mrr_at_1000 value: 81.443 - type: mrr_at_3 value: 80.30199999999999 - type: mrr_at_5 value: 80.85900000000001 - type: ndcg_at_1 value: 76.19200000000001 - type: ndcg_at_10 value: 60.9 - type: ndcg_at_100 value: 64.14699999999999 - type: ndcg_at_1000 value: 65.647 - type: ndcg_at_3 value: 56.818000000000005 - type: ndcg_at_5 value: 59.019999999999996 - type: precision_at_1 value: 76.19200000000001 - type: precision_at_10 value: 12.203 - type: precision_at_100 value: 1.478 - type: precision_at_1000 value: 0.168 - type: precision_at_3 
value: 34.616 - type: precision_at_5 value: 22.515 - type: recall_at_1 value: 38.096000000000004 - type: recall_at_10 value: 61.013 - type: recall_at_100 value: 73.90299999999999 - type: recall_at_1000 value: 83.91 - type: recall_at_3 value: 51.92400000000001 - type: recall_at_5 value: 56.286 task: type: Retrieval - dataset: config: default name: MTEB MSMARCO-PL revision: 8634c07806d5cce3a6138e260e59b81760a0a640 split: test type: clarin-knext/msmarco-pl metrics: - type: map_at_1 value: 1.548 - type: map_at_10 value: 11.049000000000001 - type: map_at_100 value: 28.874 - type: map_at_1000 value: 34.931 - type: map_at_3 value: 4.162 - type: map_at_5 value: 6.396 - type: mrr_at_1 value: 90.69800000000001 - type: mrr_at_10 value: 92.093 - type: mrr_at_100 value: 92.345 - type: mrr_at_1000 value: 92.345 - type: mrr_at_3 value: 91.86 - type: mrr_at_5 value: 91.86 - type: ndcg_at_1 value: 74.031 - type: ndcg_at_10 value: 63.978 - type: ndcg_at_100 value: 53.101 - type: ndcg_at_1000 value: 60.675999999999995 - type: ndcg_at_3 value: 71.421 - type: ndcg_at_5 value: 68.098 - type: precision_at_1 value: 90.69800000000001 - type: precision_at_10 value: 71.86 - type: precision_at_100 value: 31.395 - type: precision_at_1000 value: 5.981 - type: precision_at_3 value: 84.49600000000001 - type: precision_at_5 value: 79.07 - type: recall_at_1 value: 1.548 - type: recall_at_10 value: 12.149000000000001 - type: recall_at_100 value: 40.794999999999995 - type: recall_at_1000 value: 67.974 - type: recall_at_3 value: 4.244 - type: recall_at_5 value: 6.608 task: type: Retrieval - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 73.55413584398119 - type: f1 value: 69.65610882318181 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 76.37188971082716 - type: f1 value: 75.64847309941361 task: type: Classification - dataset: config: default name: MTEB NFCorpus-PL revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 split: test type: clarin-knext/nfcorpus-pl metrics: - type: map_at_1 value: 4.919 - type: map_at_10 value: 10.834000000000001 - type: map_at_100 value: 13.38 - type: map_at_1000 value: 14.581 - type: map_at_3 value: 8.198 - type: map_at_5 value: 9.428 - type: mrr_at_1 value: 41.176 - type: mrr_at_10 value: 50.083 - type: mrr_at_100 value: 50.559 - type: mrr_at_1000 value: 50.604000000000006 - type: mrr_at_3 value: 47.936 - type: mrr_at_5 value: 49.407000000000004 - type: ndcg_at_1 value: 39.628 - type: ndcg_at_10 value: 30.098000000000003 - type: ndcg_at_100 value: 27.061 - type: ndcg_at_1000 value: 35.94 - type: ndcg_at_3 value: 35.135 - type: ndcg_at_5 value: 33.335 - type: precision_at_1 value: 41.176 - type: precision_at_10 value: 22.259999999999998 - type: precision_at_100 value: 6.712 - type: precision_at_1000 value: 1.9060000000000001 - type: precision_at_3 value: 33.23 - type: precision_at_5 value: 29.04 - type: recall_at_1 value: 4.919 - type: recall_at_10 value: 14.196 - type: recall_at_100 value: 26.948 - type: recall_at_1000 value: 59.211000000000006 - type: recall_at_3 value: 9.44 - type: recall_at_5 value: 11.569 task: type: Retrieval - dataset: config: default name: MTEB NQ-PL revision: f171245712cf85dd4700b06bef18001578d0ca8d split: test type: clarin-knext/nq-pl metrics: - type: 
map_at_1 value: 25.35 - type: map_at_10 value: 37.884 - type: map_at_100 value: 38.955 - type: map_at_1000 value: 39.007999999999996 - type: map_at_3 value: 34.239999999999995 - type: map_at_5 value: 36.398 - type: mrr_at_1 value: 28.737000000000002 - type: mrr_at_10 value: 39.973 - type: mrr_at_100 value: 40.844 - type: mrr_at_1000 value: 40.885 - type: mrr_at_3 value: 36.901 - type: mrr_at_5 value: 38.721 - type: ndcg_at_1 value: 28.708 - type: ndcg_at_10 value: 44.204 - type: ndcg_at_100 value: 48.978 - type: ndcg_at_1000 value: 50.33 - type: ndcg_at_3 value: 37.36 - type: ndcg_at_5 value: 40.912 - type: precision_at_1 value: 28.708 - type: precision_at_10 value: 7.367 - type: precision_at_100 value: 1.0030000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 17.034 - type: precision_at_5 value: 12.293999999999999 - type: recall_at_1 value: 25.35 - type: recall_at_10 value: 61.411 - type: recall_at_100 value: 82.599 - type: recall_at_1000 value: 92.903 - type: recall_at_3 value: 43.728 - type: recall_at_5 value: 51.854 task: type: Retrieval - dataset: config: default name: MTEB PAC revision: None split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.49422763833996 - type: f1 value: 66.73472657783407 task: type: Classification - dataset: config: default name: MTEB PPC revision: None split: test type: PL-MTEB/ppc-pairclassification metrics: - type: cos_sim_accuracy value: 81.0 - type: cos_sim_ap value: 91.47194213011349 - type: cos_sim_f1 value: 84.73767885532592 - type: cos_sim_precision value: 81.49847094801224 - type: cos_sim_recall value: 88.24503311258279 - type: dot_accuracy value: 81.0 - type: dot_ap value: 91.47194213011349 - type: dot_f1 value: 84.73767885532592 - type: dot_precision value: 81.49847094801224 - type: dot_recall value: 88.24503311258279 - type: euclidean_accuracy value: 81.0 - type: euclidean_ap value: 91.47194213011349 - type: euclidean_f1 value: 84.73767885532592 - type: euclidean_precision value: 81.49847094801224 - type: euclidean_recall value: 88.24503311258279 - type: manhattan_accuracy value: 81.0 - type: manhattan_ap value: 91.46464475050571 - type: manhattan_f1 value: 84.48687350835321 - type: manhattan_precision value: 81.31699846860643 - type: manhattan_recall value: 87.91390728476821 - type: max_accuracy value: 81.0 - type: max_ap value: 91.47194213011349 - type: max_f1 value: 84.73767885532592 task: type: PairClassification - dataset: config: default name: MTEB PSC revision: None split: test type: PL-MTEB/psc-pairclassification metrics: - type: cos_sim_accuracy value: 97.6808905380334 - type: cos_sim_ap value: 99.27948611836348 - type: cos_sim_f1 value: 96.15975422427034 - type: cos_sim_precision value: 96.90402476780186 - type: cos_sim_recall value: 95.42682926829268 - type: dot_accuracy value: 97.6808905380334 - type: dot_ap value: 99.2794861183635 - type: dot_f1 value: 96.15975422427034 - type: dot_precision value: 96.90402476780186 - type: dot_recall value: 95.42682926829268 - type: euclidean_accuracy value: 97.6808905380334 - type: euclidean_ap value: 99.2794861183635 - type: euclidean_f1 value: 96.15975422427034 - type: euclidean_precision value: 96.90402476780186 - type: euclidean_recall value: 95.42682926829268 - type: manhattan_accuracy value: 97.6808905380334 - type: manhattan_ap value: 99.28715055268721 - type: manhattan_f1 value: 96.14791987673343 - type: manhattan_precision value: 97.19626168224299 - type: manhattan_recall value: 
95.1219512195122 - type: max_accuracy value: 97.6808905380334 - type: max_ap value: 99.28715055268721 - type: max_f1 value: 96.15975422427034 task: type: PairClassification - dataset: config: default name: MTEB PolEmo2.0-IN revision: None split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 86.16343490304708 - type: f1 value: 83.3442579486744 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: None split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 68.40080971659918 - type: f1 value: 53.13720751142237 task: type: Classification - dataset: config: default name: MTEB Quora-PL revision: 0be27e93455051e531182b85e85e425aba12e9d4 split: test type: clarin-knext/quora-pl metrics: - type: map_at_1 value: 63.322 - type: map_at_10 value: 76.847 - type: map_at_100 value: 77.616 - type: map_at_1000 value: 77.644 - type: map_at_3 value: 73.624 - type: map_at_5 value: 75.603 - type: mrr_at_1 value: 72.88 - type: mrr_at_10 value: 80.376 - type: mrr_at_100 value: 80.604 - type: mrr_at_1000 value: 80.61 - type: mrr_at_3 value: 78.92 - type: mrr_at_5 value: 79.869 - type: ndcg_at_1 value: 72.89999999999999 - type: ndcg_at_10 value: 81.43 - type: ndcg_at_100 value: 83.394 - type: ndcg_at_1000 value: 83.685 - type: ndcg_at_3 value: 77.62599999999999 - type: ndcg_at_5 value: 79.656 - type: precision_at_1 value: 72.89999999999999 - type: precision_at_10 value: 12.548 - type: precision_at_100 value: 1.4869999999999999 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 34.027 - type: precision_at_5 value: 22.654 - type: recall_at_1 value: 63.322 - type: recall_at_10 value: 90.664 - type: recall_at_100 value: 97.974 - type: recall_at_1000 value: 99.636 - type: recall_at_3 value: 80.067 - type: recall_at_5 value: 85.526 task: type: Retrieval - dataset: config: default name: MTEB SCIDOCS-PL revision: 45452b03f05560207ef19149545f168e596c9337 split: test type: clarin-knext/scidocs-pl metrics: - type: map_at_1 value: 3.95 - type: map_at_10 value: 9.658999999999999 - type: map_at_100 value: 11.384 - type: map_at_1000 value: 11.677 - type: map_at_3 value: 7.055 - type: map_at_5 value: 8.244 - type: mrr_at_1 value: 19.5 - type: mrr_at_10 value: 28.777 - type: mrr_at_100 value: 29.936 - type: mrr_at_1000 value: 30.009999999999998 - type: mrr_at_3 value: 25.55 - type: mrr_at_5 value: 27.284999999999997 - type: ndcg_at_1 value: 19.5 - type: ndcg_at_10 value: 16.589000000000002 - type: ndcg_at_100 value: 23.879 - type: ndcg_at_1000 value: 29.279 - type: ndcg_at_3 value: 15.719 - type: ndcg_at_5 value: 13.572000000000001 - type: precision_at_1 value: 19.5 - type: precision_at_10 value: 8.62 - type: precision_at_100 value: 1.924 - type: precision_at_1000 value: 0.322 - type: precision_at_3 value: 14.6 - type: precision_at_5 value: 11.78 - type: recall_at_1 value: 3.95 - type: recall_at_10 value: 17.477999999999998 - type: recall_at_100 value: 38.99 - type: recall_at_1000 value: 65.417 - type: recall_at_3 value: 8.883000000000001 - type: recall_at_5 value: 11.933 task: type: Retrieval - dataset: config: default name: MTEB SICK-E-PL revision: None split: test type: PL-MTEB/sicke-pl-pairclassification metrics: - type: cos_sim_accuracy value: 83.48960456583775 - type: cos_sim_ap value: 76.31522115825375 - type: cos_sim_f1 value: 70.35573122529645 - type: cos_sim_precision value: 70.9934735315446 - type: cos_sim_recall value: 69.72934472934473 - type: dot_accuracy value: 83.48960456583775 - type: dot_ap value: 76.31522115825373 - type: dot_f1 
value: 70.35573122529645 - type: dot_precision value: 70.9934735315446 - type: dot_recall value: 69.72934472934473 - type: euclidean_accuracy value: 83.48960456583775 - type: euclidean_ap value: 76.31522115825373 - type: euclidean_f1 value: 70.35573122529645 - type: euclidean_precision value: 70.9934735315446 - type: euclidean_recall value: 69.72934472934473 - type: manhattan_accuracy value: 83.46922136159804 - type: manhattan_ap value: 76.18474601388084 - type: manhattan_f1 value: 70.34779490856937 - type: manhattan_precision value: 70.83032490974729 - type: manhattan_recall value: 69.87179487179486 - type: max_accuracy value: 83.48960456583775 - type: max_ap value: 76.31522115825375 - type: max_f1 value: 70.35573122529645 task: type: PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: None split: test type: PL-MTEB/sickr-pl-sts metrics: - type: cos_sim_pearson value: 77.95374883876302 - type: cos_sim_spearman value: 73.77630219171942 - type: euclidean_pearson value: 75.81927069594934 - type: euclidean_spearman value: 73.7763211303831 - type: manhattan_pearson value: 76.03126859057528 - type: manhattan_spearman value: 73.96528138013369 task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 37.388282764841826 - type: cos_sim_spearman value: 40.83477184710897 - type: euclidean_pearson value: 26.754737044177805 - type: euclidean_spearman value: 40.83477184710897 - type: manhattan_pearson value: 26.760453110872458 - type: manhattan_spearman value: 41.034477441383856 task: type: STS - dataset: config: default name: MTEB SciFact-PL revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e split: test type: clarin-knext/scifact-pl metrics: - type: map_at_1 value: 49.15 - type: map_at_10 value: 61.690999999999995 - type: map_at_100 value: 62.348000000000006 - type: map_at_1000 value: 62.38 - type: map_at_3 value: 58.824 - type: map_at_5 value: 60.662000000000006 - type: mrr_at_1 value: 51.333 - type: mrr_at_10 value: 62.731 - type: mrr_at_100 value: 63.245 - type: mrr_at_1000 value: 63.275000000000006 - type: mrr_at_3 value: 60.667 - type: mrr_at_5 value: 61.93300000000001 - type: ndcg_at_1 value: 51.333 - type: ndcg_at_10 value: 67.168 - type: ndcg_at_100 value: 69.833 - type: ndcg_at_1000 value: 70.56700000000001 - type: ndcg_at_3 value: 62.40599999999999 - type: ndcg_at_5 value: 65.029 - type: precision_at_1 value: 51.333 - type: precision_at_10 value: 9.333 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.333 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 49.15 - type: recall_at_10 value: 82.533 - type: recall_at_100 value: 94.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 69.917 - type: recall_at_5 value: 76.356 task: type: Retrieval - dataset: config: default name: MTEB TRECCOVID-PL revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd split: test type: clarin-knext/trec-covid-pl metrics: - type: map_at_1 value: 0.261 - type: map_at_10 value: 2.1260000000000003 - type: map_at_100 value: 12.171999999999999 - type: map_at_1000 value: 26.884999999999998 - type: map_at_3 value: 0.695 - type: map_at_5 value: 1.134 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 96.952 - type: mrr_at_100 value: 96.952 - type: mrr_at_1000 value: 96.952 - type: mrr_at_3 value: 96.667 - type: mrr_at_5 value: 96.667 - type: ndcg_at_1 
value: 92.0 - type: ndcg_at_10 value: 81.193 - type: ndcg_at_100 value: 61.129 - type: ndcg_at_1000 value: 51.157 - type: ndcg_at_3 value: 85.693 - type: ndcg_at_5 value: 84.129 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 85.39999999999999 - type: precision_at_100 value: 62.03999999999999 - type: precision_at_1000 value: 22.224 - type: precision_at_3 value: 88.0 - type: precision_at_5 value: 88.0 - type: recall_at_1 value: 0.261 - type: recall_at_10 value: 2.262 - type: recall_at_100 value: 14.981 - type: recall_at_1000 value: 46.837 - type: recall_at_3 value: 0.703 - type: recall_at_5 value: 1.172 task: type: Retrieval - dataset: config: default name: MTEB AlloProfClusteringP2P revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: v_measure value: 70.55290063940157 task: type: Clustering - dataset: config: default name: MTEB AlloProfClusteringS2S revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: v_measure value: 55.41500719337263 task: type: Clustering - dataset: config: default name: MTEB AlloprofReranking revision: 666fdacebe0291776e86f29345663dfaf80a0db9 split: test type: lyon-nlp/mteb-fr-reranking-alloprof-s2p metrics: - type: map value: 73.48697375332002 - type: mrr value: 75.01836585523822 task: type: Reranking - dataset: config: default name: MTEB AlloprofRetrieval revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: map_at_1 value: 38.454 - type: map_at_10 value: 51.605000000000004 - type: map_at_100 value: 52.653000000000006 - type: map_at_1000 value: 52.697 - type: map_at_3 value: 48.304 - type: map_at_5 value: 50.073 - type: mrr_at_1 value: 43.307 - type: mrr_at_10 value: 54.400000000000006 - type: mrr_at_100 value: 55.147999999999996 - type: mrr_at_1000 value: 55.174 - type: mrr_at_3 value: 51.77 - type: mrr_at_5 value: 53.166999999999994 - type: ndcg_at_1 value: 43.307 - type: ndcg_at_10 value: 57.891000000000005 - type: ndcg_at_100 value: 62.161 - type: ndcg_at_1000 value: 63.083 - type: ndcg_at_3 value: 51.851 - type: ndcg_at_5 value: 54.605000000000004 - type: precision_at_1 value: 43.307 - type: precision_at_10 value: 9.033 - type: precision_at_100 value: 1.172 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 22.798 - type: precision_at_5 value: 15.492 - type: recall_at_1 value: 38.454 - type: recall_at_10 value: 74.166 - type: recall_at_100 value: 92.43599999999999 - type: recall_at_1000 value: 99.071 - type: recall_at_3 value: 58.087 - type: recall_at_5 value: 64.568 task: type: Retrieval - dataset: config: fr name: MTEB AmazonReviewsClassification (fr) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 53.474 - type: f1 value: 50.38275392350236 task: type: Classification - dataset: config: default name: MTEB BSARDRetrieval revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 split: test type: maastrichtlawtech/bsard metrics: - type: map_at_1 value: 2.252 - type: map_at_10 value: 4.661 - type: map_at_100 value: 5.271 - type: map_at_1000 value: 5.3629999999999995 - type: map_at_3 value: 3.604 - type: map_at_5 value: 4.3020000000000005 - type: mrr_at_1 value: 2.252 - type: mrr_at_10 value: 4.661 - type: mrr_at_100 value: 5.271 - type: mrr_at_1000 value: 5.3629999999999995 - type: mrr_at_3 value: 3.604 - type: mrr_at_5 value: 4.3020000000000005 - type: ndcg_at_1 value: 2.252 - type: ndcg_at_10 value: 
6.3020000000000005 - type: ndcg_at_100 value: 10.342 - type: ndcg_at_1000 value: 13.475999999999999 - type: ndcg_at_3 value: 4.0649999999999995 - type: ndcg_at_5 value: 5.344 - type: precision_at_1 value: 2.252 - type: precision_at_10 value: 1.171 - type: precision_at_100 value: 0.333 - type: precision_at_1000 value: 0.059000000000000004 - type: precision_at_3 value: 1.802 - type: precision_at_5 value: 1.712 - type: recall_at_1 value: 2.252 - type: recall_at_10 value: 11.712 - type: recall_at_100 value: 33.333 - type: recall_at_1000 value: 59.458999999999996 - type: recall_at_3 value: 5.405 - type: recall_at_5 value: 8.559 task: type: Retrieval - dataset: config: default name: MTEB HALClusteringS2S revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 split: test type: lyon-nlp/clustering-hal-s2s metrics: - type: v_measure value: 28.301882091023288 task: type: Clustering - dataset: config: default name: MTEB MLSUMClusteringP2P revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 split: test type: mlsum metrics: - type: v_measure value: 45.26992995191701 task: type: Clustering - dataset: config: default name: MTEB MLSUMClusteringS2S revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 split: test type: mlsum metrics: - type: v_measure value: 42.773174876871145 task: type: Clustering - dataset: config: fr name: MTEB MTOPDomainClassification (fr) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 93.47635452552458 - type: f1 value: 93.19922617577213 task: type: Classification - dataset: config: fr name: MTEB MTOPIntentClassification (fr) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 80.2317569683683 - type: f1 value: 56.18060418621901 task: type: Classification - dataset: config: fra name: MTEB MasakhaNEWSClassification (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: accuracy value: 85.18957345971565 - type: f1 value: 80.829981537394 task: type: Classification - dataset: config: fra name: MTEB MasakhaNEWSClusteringP2P (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: v_measure value: 71.04138999801822 task: type: Clustering - dataset: config: fra name: MTEB MasakhaNEWSClusteringS2S (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: v_measure value: 71.7056263158008 task: type: Clustering - dataset: config: fr name: MTEB MassiveIntentClassification (fr) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 76.65097511768661 - type: f1 value: 73.82441070598712 task: type: Classification - dataset: config: fr name: MTEB MassiveScenarioClassification (fr) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 79.09885675857431 - type: f1 value: 78.28407777434224 task: type: Classification - dataset: config: fr name: MTEB MintakaRetrieval (fr) revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e split: test type: jinaai/mintakaqa metrics: - type: map_at_1 value: 25.307000000000002 - type: map_at_10 value: 36.723 - type: map_at_100 value: 37.713 - type: map_at_1000 value: 37.769000000000005 - type: map_at_3 value: 33.77 - type: map_at_5 value: 35.463 - type: mrr_at_1 value: 25.307000000000002 - type: mrr_at_10 value: 36.723 - type: 
mrr_at_100 value: 37.713 - type: mrr_at_1000 value: 37.769000000000005 - type: mrr_at_3 value: 33.77 - type: mrr_at_5 value: 35.463 - type: ndcg_at_1 value: 25.307000000000002 - type: ndcg_at_10 value: 42.559999999999995 - type: ndcg_at_100 value: 47.457 - type: ndcg_at_1000 value: 49.162 - type: ndcg_at_3 value: 36.461 - type: ndcg_at_5 value: 39.504 - type: precision_at_1 value: 25.307000000000002 - type: precision_at_10 value: 6.106 - type: precision_at_100 value: 0.8420000000000001 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 14.741999999999999 - type: precision_at_5 value: 10.319 - type: recall_at_1 value: 25.307000000000002 - type: recall_at_10 value: 61.056999999999995 - type: recall_at_100 value: 84.152 - type: recall_at_1000 value: 98.03399999999999 - type: recall_at_3 value: 44.226 - type: recall_at_5 value: 51.597 task: type: Retrieval - dataset: config: fr name: MTEB OpusparcusPC (fr) revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a split: test type: GEM/opusparcus metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 task: type: PairClassification - dataset: config: fr name: MTEB PawsX (fr) revision: 8a04d940a42cd40658986fdd8e3da561533a3646 split: test type: paws-x metrics: - type: cos_sim_accuracy value: 70.8 - type: cos_sim_ap value: 73.7671529695957 - type: cos_sim_f1 value: 68.80964339527875 - type: cos_sim_precision value: 62.95955882352941 - type: cos_sim_recall value: 75.85825027685493 - type: dot_accuracy value: 70.8 - type: dot_ap value: 73.78345265366947 - type: dot_f1 value: 68.80964339527875 - type: dot_precision value: 62.95955882352941 - type: dot_recall value: 75.85825027685493 - type: euclidean_accuracy value: 70.8 - type: euclidean_ap value: 73.7671529695957 - type: euclidean_f1 value: 68.80964339527875 - type: euclidean_precision value: 62.95955882352941 - type: euclidean_recall value: 75.85825027685493 - type: manhattan_accuracy value: 70.75 - type: manhattan_ap value: 73.78996383615953 - type: manhattan_f1 value: 68.79432624113475 - type: manhattan_precision value: 63.39869281045751 - type: manhattan_recall value: 75.1937984496124 - type: max_accuracy value: 70.8 - type: max_ap value: 73.78996383615953 - type: max_f1 value: 68.80964339527875 task: type: PairClassification - dataset: config: default name: MTEB SICKFr revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a split: test type: Lajavaness/SICK-fr metrics: - type: cos_sim_pearson value: 84.03253762760392 - type: cos_sim_spearman value: 79.68280105762004 - type: euclidean_pearson value: 80.98265050044444 - type: euclidean_spearman value: 79.68233242682867 - type: 
manhattan_pearson value: 80.9678911810704 - type: manhattan_spearman value: 79.70264097683109 task: type: STS - dataset: config: fr name: MTEB STS22 (fr) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 80.56896987572884 - type: cos_sim_spearman value: 81.84352499523287 - type: euclidean_pearson value: 80.40831759421305 - type: euclidean_spearman value: 81.84352499523287 - type: manhattan_pearson value: 80.74333857561238 - type: manhattan_spearman value: 82.41503246733892 task: type: STS - dataset: config: fr name: MTEB STSBenchmarkMultilingualSTS (fr) revision: 93d57ef91790589e3ce9c365164337a8a78b7632 split: test type: stsb_multi_mt metrics: - type: cos_sim_pearson value: 82.71826762276979 - type: cos_sim_spearman value: 82.25433354916042 - type: euclidean_pearson value: 81.87115571724316 - type: euclidean_spearman value: 82.25322342890107 - type: manhattan_pearson value: 82.11174867527224 - type: manhattan_spearman value: 82.55905365203084 task: type: STS - dataset: config: default name: MTEB SummEvalFr revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 split: test type: lyon-nlp/summarization-summeval-fr-p2p metrics: - type: cos_sim_pearson value: 30.659441623392887 - type: cos_sim_spearman value: 30.501134097353315 - type: dot_pearson value: 30.659444768851056 - type: dot_spearman value: 30.501134097353315 task: type: Summarization - dataset: config: default name: MTEB SyntecReranking revision: b205c5084a0934ce8af14338bf03feb19499c84d split: test type: lyon-nlp/mteb-fr-reranking-syntec-s2p metrics: - type: map value: 94.03333333333333 - type: mrr value: 94.03333333333333 task: type: Reranking - dataset: config: default name: MTEB SyntecRetrieval revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff split: test type: lyon-nlp/mteb-fr-retrieval-syntec-s2p metrics: - type: map_at_1 value: 79.0 - type: map_at_10 value: 87.61 - type: map_at_100 value: 87.655 - type: map_at_1000 value: 87.655 - type: map_at_3 value: 87.167 - type: map_at_5 value: 87.36699999999999 - type: mrr_at_1 value: 79.0 - type: mrr_at_10 value: 87.61 - type: mrr_at_100 value: 87.655 - type: mrr_at_1000 value: 87.655 - type: mrr_at_3 value: 87.167 - type: mrr_at_5 value: 87.36699999999999 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 90.473 - type: ndcg_at_100 value: 90.694 - type: ndcg_at_1000 value: 90.694 - type: ndcg_at_3 value: 89.464 - type: ndcg_at_5 value: 89.851 - type: precision_at_1 value: 79.0 - type: precision_at_10 value: 9.9 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 32.0 - type: precision_at_5 value: 19.400000000000002 - type: recall_at_1 value: 79.0 - type: recall_at_10 value: 99.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 96.0 - type: recall_at_5 value: 97.0 task: type: Retrieval - dataset: config: fr name: MTEB XPQARetrieval (fr) revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f split: test type: jinaai/xpqa metrics: - type: map_at_1 value: 39.395 - type: map_at_10 value: 59.123999999999995 - type: map_at_100 value: 60.704 - type: map_at_1000 value: 60.760000000000005 - type: map_at_3 value: 53.187 - type: map_at_5 value: 56.863 - type: mrr_at_1 value: 62.083 - type: mrr_at_10 value: 68.87299999999999 - type: mrr_at_100 value: 69.46900000000001 - type: mrr_at_1000 value: 69.48299999999999 - type: mrr_at_3 value: 66.8 - type: mrr_at_5 value: 67.928 - type: ndcg_at_1 value: 62.083 - type: 
ndcg_at_10 value: 65.583 - type: ndcg_at_100 value: 70.918 - type: ndcg_at_1000 value: 71.72800000000001 - type: ndcg_at_3 value: 60.428000000000004 - type: ndcg_at_5 value: 61.853 - type: precision_at_1 value: 62.083 - type: precision_at_10 value: 15.033 - type: precision_at_100 value: 1.9529999999999998 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 36.315 - type: precision_at_5 value: 25.955000000000002 - type: recall_at_1 value: 39.395 - type: recall_at_10 value: 74.332 - type: recall_at_100 value: 94.729 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 57.679 - type: recall_at_5 value: 65.036 task: type: Retrieval
---

## gte-Qwen2-1.5B-instruct

**gte-Qwen2-1.5B-instruct** is the latest model in the gte (General Text Embedding) model family. The model is built on the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) LLM and uses the same training data and strategies as the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model.

The model incorporates several key advancements:

- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.

## Model Information
- Model Size: 1.5B
- Embedding Dimension: 1536
- Max Input Tokens: 32k

## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```

## Usage

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192

queries = [
    "how much protein should a female eat",
    "summit define",
]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]

query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```

See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can call `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice, as sketched below.
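As a minimal illustration of the custom-prompt path (the instruction wording below reuses the task description from the Transformers example further down and is only an example, not a prompt prescribed by this card), a query can be encoded with an explicit instruction while documents are encoded as plain text:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)

# Illustrative task instruction (an assumption for this sketch); adapt it to your retrieval task.
custom_prompt = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query\n"
    "Query: "
)

queries = ["how much protein should a female eat"]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."
]

# The instruction is applied to queries only; documents receive no prompt.
query_embeddings = model.encode(queries, prompt=custom_prompt)
document_embeddings = model.encode(documents)

# Similarity scores via dot product, mirroring the usage example above.
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```

This mirrors the `get_detailed_instruct` pattern in the Transformers example below: only the query side carries the instruction.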
### Transformers ```python import torch import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0]) if left_padding: return last_hidden_states[:, -1] else: sequence_lengths = attention_mask.sum(dim=1) - 1 batch_size = last_hidden_states.shape[0] return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths] def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery: {query}' # Each query must come with a one-sentence instruction that describes the task task = 'Given a web search query, retrieve relevant passages that answer the query' queries = [ get_detailed_instruct(task, 'how much protein should a female eat'), get_detailed_instruct(task, 'summit define') ] # No need to add instruction for retrieval documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] input_texts = queries + documents tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True) model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True) max_length = 8192 # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ### infinity_emb Usage via [infinity, MIT Licensed](https://github.com/michaelfeil/infinity). 
```bash
docker run \
 --gpus "0" -p "7997":"7997" \
 michaelf34/infinity:0.0.68-trt-onnx \
 v2 --model-id Alibaba-NLP/gte-Qwen2-1.5B-instruct --revision "refs/pr/20" --dtype bfloat16 --batch-size 16 --device cuda --engine torch --port 7997 --no-bettertransformer
```

## Evaluation

### MTEB & C-MTEB

You can use the [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-1.5B-instruct** on MTEB(English)/C-MTEB(Chinese):

| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [**gte-Qwen2-1.5B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | **67.16** | **67.65** | **66.60** | **64.04** |

### GTE Models

The GTE series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:---:|:---:|:---:|:---:|:---:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |

## Cloud API Services

In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE models are also available as commercial API services on Alibaba Cloud.

- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.

Note that the models behind the commercial APIs are not entirely identical to the open-source models.

## Community support

### Fine-tuning

GTE models can be fine-tuned with the third-party framework SWIFT.

```shell
pip install ms-swift -U
```

```shell
# check: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
USE_HF=1 \
swift sft \
    --model Alibaba-NLP/gte-Qwen2-1.5B-instruct \
    --train_type lora \
    --dataset 'sentence-transformers/stsb' \
    --torch_dtype bfloat16 \
    --num_train_epochs 10 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
    --eval_steps 100 \
    --save_steps 100 \
    --eval_strategy steps \
    --use_chat_template false \
    --save_total_limit 5 \
    --logging_steps 5 \
    --output_dir output \
    --warmup_ratio 0.05 \
    --learning_rate 5e-6 \
    --deepspeed zero3 \
    --dataloader_num_workers 4 \
    --task_type embedding \
    --loss_type cosine_similarity \
    --dataloader_drop_last true
```

## Citation

If you find our paper or models helpful, please consider citing:

```
@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}
```
MilkyWay0932/test3
MilkyWay0932
2025-06-03T17:32:16Z
4
0
null
[ "region:us" ]
null
2025-05-13T12:03:12Z
Disclaimer: I am not the author/creator of these models. Full credit and all rights belong to the respective original creators. They are archived here solely for personal reference and backup purposes.