| modelId (string, 5 to 138 chars) | author (string, 2 to 42 chars) | last_modified (date, 2020-02-15 to 2025-04-13) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 425 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 to 2025-04-13) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
bobae/openai_finetuned_detector | bobae | "2024-06-01T02:47:28Z" | 115 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-01T02:31:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
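The tags list a RoBERTa checkpoint with the `text-classification` pipeline tag, so a minimal loading sketch might look like the following (the label names and intended inputs are not documented, so treat the example text and its interpretation as assumptions):
```python
from transformers import pipeline

# Assumption: the checkpoint exposes a standard sequence-classification head.
classifier = pipeline("text-classification", model="bobae/openai_finetuned_detector")
print(classifier("This paragraph may or may not have been written by a language model."))
```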
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MYX4567/distilbert-base-uncased-finetuned-squad | MYX4567 | "2021-07-28T08:07:15Z" | 42 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
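A minimal usage sketch with the standard `question-answering` pipeline (the question and context below are purely illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="MYX4567/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```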
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2177 | 1.0 | 5533 | 1.1565 |
| 0.9472 | 2.0 | 11066 | 1.1174 |
| 0.7634 | 3.0 | 16599 | 1.1520 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
research-backup/roberta-large-semeval2012-mask-prompt-a-loob | research-backup | "2022-09-19T18:59:55Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-08-26T06:59:38Z" | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-a-loob
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.9060317460317461
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6550802139037433
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.655786350148368
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.8043357420789328
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.95
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.631578947368421
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6412037037037037
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9245140876902215
- name: F1 (macro)
type: f1_macro
value: 0.9208294548760101
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8814553990610329
- name: F1 (macro)
type: f1_macro
value: 0.7355497663400952
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7128927410617552
- name: F1 (macro)
type: f1_macro
value: 0.7065924774146382
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9646657856298254
- name: F1 (macro)
type: f1_macro
value: 0.8945677578632619
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9081792541523034
- name: F1 (macro)
type: f1_macro
value: 0.906414518159255
---
# relbert/roberta-large-semeval2012-mask-prompt-a-loob
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.6550802139037433
- Accuracy on SAT: 0.655786350148368
- Accuracy on BATS: 0.8043357420789328
- Accuracy on U2: 0.631578947368421
- Accuracy on U4: 0.6412037037037037
- Accuracy on Google: 0.95
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9245140876902215
- Micro F1 score on CogALexV: 0.8814553990610329
- Micro F1 score on EVALution: 0.7128927410617552
- Micro F1 score on K&H+N: 0.9646657856298254
- Micro F1 score on ROOT09: 0.9081792541523034
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.9060317460317461
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-loob")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 21
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
SummerSigh/Pythia410m-Instruct-SFT | SummerSigh | "2023-03-19T18:38:26Z" | 19 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-03-04T20:17:34Z" |
## Usage:
```python
from transformers import pipeline, AutoTokenizer

# The tokenizer is only needed here to supply the pad token id for generation.
tokenizer = AutoTokenizer.from_pretrained("SummerSigh/Pythia410m-Instruct-SFT")
generator = pipeline('text-generation', model='SummerSigh/Pythia410m-Instruct-SFT')

prompt = input("Text here: ")
text = generator("<user>" + prompt + "<user><kinrel>", max_length=200, do_sample=True, top_p=0.7,
                 temperature=0.5, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)
generated_text = text[0]["generated_text"]

# Keep only the first response by cropping at the second <kinrel> marker.
parts = generated_text.split("<kinrel>")
cropped_text = "<kinrel>".join(parts[:2]) + "<kinrel>"
print(cropped_text)
``` |
thenlpresearcher/gemma_sequence_classification | thenlpresearcher | "2025-02-06T09:52:51Z" | 12 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2-9b",
"base_model:adapter:google/gemma-2-9b",
"license:gemma",
"region:us"
] | null | "2025-02-06T09:52:23Z" | ---
library_name: peft
license: gemma
base_model: google/gemma-2-9b
tags:
- generated_from_trainer
model-index:
- name: gemma_sequence_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma_sequence_classification
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2145
- Pearson: 0.9718
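Since the card reports a Pearson correlation, the adapter presumably sits on a single-output regression head; a loading sketch under that assumption (the `num_labels=1` choice is not confirmed by the card) could look like:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: one regression output, inferred from the Pearson metric above.
base = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-2-9b", num_labels=1, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "thenlpresearcher/gemma_sequence_classification")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

inputs = tokenizer("An example sentence to score.", return_tensors="pt").to(model.device)
with torch.no_grad():
    print(model(**inputs).logits.squeeze().item())
```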
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 206 | 0.2220 | 0.9313 |
| No log | 2.0 | 412 | 0.2142 | 0.9564 |
| 0.3426 | 3.0 | 618 | 0.1653 | 0.9716 |
| 0.3426 | 4.0 | 824 | 0.2545 | 0.9750 |
| 0.0318 | 5.0 | 1030 | 0.2145 | 0.9718 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.45.2
- Pytorch 2.4.0a0+f70bd71a48.nv24.06
- Datasets 3.2.0
- Tokenizers 0.20.3 |
nabilrakaiza/image_classification | nabilrakaiza | "2024-06-05T16:27:37Z" | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-05T14:53:24Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7706
- eval_accuracy: 0.2875
- eval_runtime: 166.3739
- eval_samples_per_second: 0.962
- eval_steps_per_second: 0.24
- epoch: 0.0063
- step: 1
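A minimal inference sketch using the standard `image-classification` pipeline (the image path is a placeholder; the label set comes from the fine-tuning imagefolder dataset):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nabilrakaiza/image_classification")
# Replace "example.jpg" with a real local path or an image URL.
print(classifier("example.jpg"))
```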
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
dor88/ppo-LunarLander-v2 | dor88 | "2022-12-10T15:13:13Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-10T14:41:53Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.02 +/- 16.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is assumed, not documented; adjust it to the checkpoint stored in this repo.
checkpoint = load_from_hub(repo_id="dor88/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mlfoundations-dev/oh_v3-1_only_gpteacher | mlfoundations-dev | "2024-11-22T19:44:20Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-22T19:31:38Z" | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: oh_v3-1_only_gpteacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oh_v3-1_only_gpteacher
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/oh_v3-1_only_gpteacher dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0549
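A minimal generation sketch (assuming the tokenizer ships a chat template, as the `conversational` tag suggests; `device_map="auto"` additionally assumes the accelerate package is installed):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mlfoundations-dev/oh_v3-1_only_gpteacher", device_map="auto")
messages = [{"role": "user", "content": "Explain gradient accumulation in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```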
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.7273 | 2 | 1.1344 |
| No log | 1.8182 | 5 | 1.0645 |
| No log | 2.1818 | 6 | 1.0549 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.3
|
nttx/4e2a8b20-6a00-4df4-98ed-d5b5ad98f43d | nttx | "2025-01-13T13:58:14Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T13:44:53Z" | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4e2a8b20-6a00-4df4-98ed-d5b5ad98f43d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d9e93edc97a55b65_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d9e93edc97a55b65_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: nttx/4e2a8b20-6a00-4df4-98ed-d5b5ad98f43d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/d9e93edc97a55b65_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c6ed2206-df1b-45ec-bf96-3249cc68b770
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c6ed2206-df1b-45ec-bf96-3249cc68b770
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4e2a8b20-6a00-4df4-98ed-d5b5ad98f43d
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1803
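The repository ships a LoRA adapter rather than merged weights, so a loading sketch (assuming the standard PEFT workflow on top of the Starling base model) would be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha", device_map="auto")
model = PeftModel.from_pretrained(base, "nttx/4e2a8b20-6a00-4df4-98ed-d5b5ad98f43d")
tokenizer = AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
```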
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 2.0589 |
| 5.3934 | 0.1788 | 100 | 1.4118 |
| 6.7862 | 0.3576 | 200 | 1.2747 |
| 4.9746 | 0.5364 | 300 | 1.2248 |
| 4.2009 | 0.7152 | 400 | 1.1803 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
haturusinghe/1st_0.6107080460589771_05_02-0949_xlm-roberta-base_mrp_2e-05_8_937.ckpt | haturusinghe | "2024-02-05T09:51:14Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-05T09:49:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
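The tags only indicate an XLM-RoBERTa checkpoint with no pipeline tag, so the most that can be sketched here is a generic load (any task head the checkpoint may carry is undocumented):
```python
from transformers import AutoModel, AutoTokenizer

model_id = "haturusinghe/1st_0.6107080460589771_05_02-0949_xlm-roberta-base_mrp_2e-05_8_937.ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the base encoder only; the task head, if any, is unknown
```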
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jxm/gte-32-noise-0.001 | jxm | "2024-09-09T22:15:32Z" | 54 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-09-09T22:15:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
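No architecture or pipeline tag is given, so only a generic load can be sketched (whether this is an embedding model, as the `gte` prefix hints, is not confirmed by the card):
```python
from transformers import AutoModel, AutoTokenizer

model_id = "jxm/gte-32-noise-0.001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
```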
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hamzaabbas77/bloom-1b7-good-reviews | Hamzaabbas77 | "2023-08-17T08:17:08Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-17T08:17:06Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
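For reference, a sketch of the equivalent `BitsAndBytesConfig` that reproduces the settings listed above (a translation onto the current transformers API, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```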
### Framework versions
- PEFT 0.5.0.dev0
|
softwareweaver/ColossusProject-xl-Olive-Onnx | softwareweaver | "2023-12-02T13:40:08Z" | 1 | 0 | diffusers | [
"diffusers",
"onnx",
"text-to-image",
"en",
"license:openrail",
"diffusers:ORTStableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-11-26T05:09:12Z" | ---
license: openrail
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
Olive Optimized DirectML Onnx model for https://civitai.com/models/147720
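A minimal inference sketch via Optimum's ONNX Runtime pipeline (the DirectML execution provider below is an assumption based on the DirectML label; adjust the provider on other platforms):
```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Assumption: DirectML execution provider, matching the model's Windows/DirectML target.
pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "softwareweaver/ColossusProject-xl-Olive-Onnx", provider="DmlExecutionProvider"
)
image = pipe("a lighthouse on a cliff at sunset").images[0]
image.save("output.png")
```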
This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI |
LarryAIDraw/terakomari_gandesblood | LarryAIDraw | "2023-11-15T02:35:27Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-15T02:27:18Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/132341/terakomari-gandesblood-the-vexations-of-a-shut-in-vampire-princess-or-or |
Legalaz/15_llamboch2_03_08 | Legalaz | "2025-01-31T08:12:06Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T08:09:55Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.8109
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
|
Holarissun/zephyr3b_aisft_gsm8k_rand_alpha0.9901-subset7000 | Holarissun | "2024-03-12T22:25:06Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:stabilityai/stablelm-zephyr-3b",
"base_model:adapter:stabilityai/stablelm-zephyr-3b",
"license:other",
"region:us"
] | null | "2024-03-12T22:25:02Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: stabilityai/stablelm-zephyr-3b
model-index:
- name: zephyr3b_aisft_gsm8k_rand_alpha0.9901-subset7000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr3b_aisft_gsm8k_rand_alpha0.9901-subset7000
This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
dbmdz/bert-base-german-cased | dbmdz | "2023-09-06T22:19:38Z" | 45,677 | 19 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"doi:10.57967/hf/4377",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: de
license: mit
---
# 🤗 + 📚 dbmdz German BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources another German BERT model 🎉
# German BERT
## Stats
In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/) we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model is trained with an initial
sequence length of 512 subwords, and training was performed for 1.5M steps.
This release includes both cased and uncased models.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)
## Usage
With Transformers >= 2.3 our German BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
JuniperChinenye/d2 | JuniperChinenye | "2024-11-16T23:09:30Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-15T10:51:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
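The tags list a Llama-architecture checkpoint with the `text-generation` pipeline tag, so a minimal loading sketch (the prompt below is illustrative; nothing about intended prompting or context length is documented) could be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="JuniperChinenye/d2", device_map="auto")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```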
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/zaidalyafeai | huggingtweets | "2022-06-09T13:03:12Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-06-09T13:02:27Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/zaidalyafeai/1654779787447/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521723273922461696/m8_zotM4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Zaid زيد</div>
<div style="text-align: center; font-size: 14px;">@zaidalyafeai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zaid زيد.
| Data | Zaid زيد |
| --- | --- |
| Tweets downloaded | 2295 |
| Retweets | 74 |
| Short tweets | 217 |
| Tweets kept | 2004 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39e5cxbb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zaidalyafeai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zaidalyafeai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fbaldassarri/sapienzanlp_Minerva-7B-base-v1.0-autogptq-int4-gs128-sym | fbaldassarri | "2024-12-28T20:45:25Z" | 7 | 0 | null | [
"safetensors",
"mistral",
"pretrained",
"pytorch",
"causal-lm",
"minerva",
"autoround",
"intel-autoround",
"woq",
"gptq",
"autogptq",
"auto-gptq",
"intel",
"text-generation",
"it",
"en",
"dataset:uonlp/CulturaX",
"base_model:sapienzanlp/Minerva-7B-base-v1.0",
"base_model:quantized:sapienzanlp/Minerva-7B-base-v1.0",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | "2024-12-28T20:23:27Z" | ---
language:
- it
- en
tags:
- pretrained
- pytorch
- causal-lm
- minerva
- autoround
- intel-autoround
- woq
- gptq
- autogptq
- auto-gptq
- intel
license: apache-2.0
model_name: Minerva 7B base v1.0
base_model:
- sapienzanlp/Minerva-7B-base-v1.0
inference: false
model_creator: sapienzanlp
datasets:
- uonlp/CulturaX
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [sapienzanlp/Minerva-7B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-7B-base-v1.0) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Symmetrical Quantization
- Method AutoGPTQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.3
Note: this INT4 version of Minerva-7B-base-v1.0 has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sapienzanlp/Minerva-7B-base-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 128, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/sapienzanlp_Minerva-7B-base-v1.0-autogptq-int4-gs128-sym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
joshcx/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit | joshcx | "2025-02-23T06:04:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-23T06:04:44Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** joshcx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LLM-Opt/TempNet-LLaMA2-Chat-70B-v0.1 | LLM-Opt | "2024-04-09T12:45:59Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2404.04575",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-29T00:11:16Z" | ---
license: mit
---
<h1 align="center">To Cool or not to Cool? <br>
Temperature Network Meets Large Foundation Models via DRO </h1>
The temperature parameter plays a profound role during training and/or inference with large foundation models (LFMs) such as large language models (LLMs) and CLIP models. Particularly, it adjusts the logits in the softmax function in LLMs, which is crucial for next token generation, and it scales the similarities in the contrastive loss for training CLIP models. A significant question remains: "*Is it viable to learn a neural network to predict a personalized temperature of any input data for enhancing LFMs?*" In this paper, we present **a principled framework** for learning a small yet generalizable temperature prediction network (TempNet) to improve LFMs. Our solution is composed of a novel learning framework with robust losses underpinned by constrained distributionally robust optimization (DRO), and a properly designed TempNet with theoretical inspiration. TempNet can be trained together with a large foundation model from scratch or learned separately given a pretrained foundation model. It is not only useful for predicting personalized temperature to promote the training of LFMs but also generalizable and transferable to new tasks. Our experiments on LLMs and CLIP models demonstrate that TempNet greatly improves the performance of existing solutions or models.
### Table of Contents
- [Introduction](#introduction)
- [Training](#training)
- [Inference](#inference)
- [Acknowledgment](#acknowledgment)
- [Citation](#citation)
## Introduction
### Our Proposed Method
We introduce **a principled framework** for developing a small yet generalizable network for temperature prediction, TempNet, aimed at enhancing large foundation models (LFMs) such as large language models (LLMs) and CLIP models. The Temperature Network is a plug-and-play architecture that can be implemented atop LFMs. Our solution is composed of a novel learning framework with robust losses underpinned by constrained distributionally robust optimization (DRO), and a properly designed TempNet with theoretical inspiration. TempNet can be trained together with a large foundation model from scratch or learned separately given a pretrained foundation model. It is not only useful for predicting personalized temperature to promote the training of LFMs but also generalizable and transferable to new tasks.
<div align="center" style="display: flex; justify-content: center; align-items: center;">
<img src="images/tempnet_overall.jpg" style="width: 70%;"/>
</div>
In the figure above, we present the framework of training LFMs with TempNet on the left and the structure of TempNet on the right.
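To make the role of the temperature concrete, here is a minimal illustrative sketch (not the paper's implementation): a per-input temperature simply rescales the logits before the softmax, and this scalar is exactly what TempNet learns to predict from the input.

```python
import torch

def temperature_scaled_probs(logits: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    # In TempNet, tau would be predicted per input by a small network;
    # here it is passed in directly for illustration.
    return torch.softmax(logits / tau, dim=-1)

logits = torch.tensor([2.0, 1.0, 0.1])
print(temperature_scaled_probs(logits, torch.tensor(0.7)))  # sharper distribution
print(temperature_scaled_probs(logits, torch.tensor(1.5)))  # flatter, more exploratory
```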
### Experimental Results
Results of training LLMs in various settings, including training from scratch, finetuning a pretrained LLM model, and learning TempNet only with a frozen LLM model.
<div align="center">
<img src="images/exp1.jpg" width="80%"/>
</div>
Results on contrastive learning. For image-text retrieval on Flickr30K and MSCOCO, we report IR@1 and TR@1, i.e., Recall@1 for image retrieval (IR) and text retrieval (TR). For classification tasks, we report top-1 accuracy (%). We report the mean score and standard deviation over 3 runs with different random seeds.
<div align="center">
<img src="images/exp_2.jpg" width="80%"/>
</div>
In the following experiments, we investigate two components of our framework: the DRO-based robust loss and the role of TempNet. One can observe that both components significantly impact performance.
<div align="center">
<img src="images/exp3.jpg" width="80%"/>
</div>
To test TempNet's performance on instruction-following tasks, we freeze the LLaMA2 Chat models, train TempNet, and then evaluate on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark. We present results for three model sizes in the table below, including TempNet's training time on Nvidia A100-80G GPUs and the win rates on AlpacaEval data. The results demonstrate that TempNet converges quickly and achieves consistent improvements.
<div align="center">
<img src="images/exp4.jpg" width="40%"/>
</div>
Here, we reveal why TempNet enhances performance by comparing the performances of LLaMA2 7B Chat (with the default $\tau=0.7$) and LLaMA2 7B Chat + TempNet on the AlpacaEval dataset. We select a representative example for which the AlpacaEval annotator, GPT-4, deems the response from LLaMA2 + TempNet to be not only superior to that of LLaMA but also better than the baseline response generated by GPT-4.
<div align="center">
<img src="images/exp5.jpg" width="60%"/>
</div>
<div align="center">
<img src="images/exp6.jpg" width="60%"/>
</div>
Naming a dish is a relatively subjective task. At lower temperature values, the LLaMA2 7B Chat model's output is relatively fixed and lacks creativity, while at higher temperatures the model generates more creative names. With TempNet, LLaMA2 7B Chat produces a higher average temperature of 0.82 while generating names for this task, ultimately creating the novel name **Tunanadoes**.
Below, we further show the temperature predicted by TempNet each time a token is generated. One can clearly observe that when many candidate tokens are plausible, the predicted temperature is higher; conversely, when few candidate tokens are plausible, the predicted temperature is lower.
<div align="center">
<img src="images/pred_tau.jpg" width="100%"/>
</div>
### More Details
For more details, please refer to our [paper](http://arxiv.org/abs/2404.04575).
## Training
We conduct experiments across various tasks and models to validate the effectiveness of TempNet. Given the different training frameworks required by each model, we distribute the training code for different models across four directories: `GPT_2`, `LLaMA-1`, `LLaMA-2`, and `Bimodal-CL`.
## Inference
We upload the base models for LLaMA 2 Chat 7B, 13B, and 70B, together with their respective TempNets, to [Hugging Face](https://huggingface.co/LLM-Opt). The `tempnet.py` file in the [repository](https://github.com/zhqiu/TempNet) contains the definition of the TempNet class and a class that inherits from Hugging Face's LLaMA implementation and incorporates TempNet. You can download this file and use the following code to run inference with a LLaMA model that incorporates TempNet.
```python
import torch
from tempnet import LLaMA_TempNet
from transformers import AutoTokenizer, GenerationConfig
model_name = 'LLM-Opt/TempNet-LLaMA2-Chat-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_name, legacy=False)
generation_config = GenerationConfig.from_pretrained(model_name)
model = LLaMA_TempNet.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)
inputs = 'How do you get water in the desert?'
input_ids = tokenizer(inputs, return_tensors="pt").input_ids.cuda()
outputs = model.generate(input_ids, generation_config=generation_config)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)[len(inputs)-1:].strip()
```
## Acknowledgment
This repository benefits from [ALBEF](https://github.com/salesforce/ALBEF), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai), [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), and [DeepSpeed](https://github.com/microsoft/DeepSpeed).
Thanks for their wonderful work and their efforts to advance research.
## Citation
If you find this tutorial helpful, please cite our paper:
```
@article{qiu2024to,
title={To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO},
author={Zi-Hao Qiu and Siqi Guo and Mao Xu and Tuo Zhao and Lijun Zhang and Tianbao Yang},
journal={arXiv preprint arXiv:2404.04575},
year={2024}
}
```
|
DevQuasar/huihui-ai.granite-3.1-8b-instruct-abliterated-GGUF | DevQuasar | "2025-02-01T23:13:22Z" | 113 | 0 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/granite-3.1-8b-instruct-abliterated",
"base_model:quantized:huihui-ai/granite-3.1-8b-instruct-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-12-21T16:37:52Z" | ---
base_model:
- huihui-ai/granite-3.1-8b-instruct-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [huihui-ai/granite-3.1-8b-instruct-abliterated](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
DOOGLAK/Article_50v5_NER_Model_3Epochs_AUGMENTED | DOOGLAK | "2022-08-11T08:25:42Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:article50v5_wikigold_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-08-11T08:19:49Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article50v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_50v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article50v5_wikigold_split
type: article50v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.29246676514032494
- name: Recall
type: recall
value: 0.1442097596504006
- name: F1
type: f1
value: 0.19317073170731708
- name: Accuracy
type: accuracy
value: 0.8181431100553527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_50v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4652
- Precision: 0.2925
- Recall: 0.1442
- F1: 0.1932
- Accuracy: 0.8181
## Model description
More information needed
## Intended uses & limitations
More information needed
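As a hedged usage sketch (not provided by the model author), the checkpoint should load with the standard token-classification pipeline; given the scores below, expect fairly noisy predictions.

```python
from transformers import pipeline

# Repo id taken from this model card.
ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_50v5_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",
)
print(ner("The Wikimedia Foundation is headquartered in San Francisco."))
```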
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code equivalent follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
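As a hedged sketch (the original training script is not provided), these settings correspond roughly to the following `TrainingArguments`; the Adam betas/epsilon and the linear scheduler listed above are the transformers defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Article_50v5_NER_Model_3Epochs_AUGMENTED",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```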
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.5854 | 0.2054 | 0.0056 | 0.0109 | 0.7805 |
| No log | 2.0 | 52 | 0.4686 | 0.2819 | 0.1224 | 0.1706 | 0.8128 |
| No log | 3.0 | 78 | 0.4652 | 0.2925 | 0.1442 | 0.1932 | 0.8181 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF | MrRobotoAI | "2025-02-12T10:47:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k",
"base_model:quantized:MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-12T10:46:33Z" | ---
base_model: MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k`](https://huggingface.co/MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF --hf-file darkidol-longwriter-v13-8b-uncensored-1048k-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF --hf-file darkidol-longwriter-v13-8b-uncensored-1048k-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF --hf-file darkidol-longwriter-v13-8b-uncensored-1048k-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/DarkIdol-LongWriter-v13-8B-Uncensored-1048k-Q4_K_M-GGUF --hf-file darkidol-longwriter-v13-8b-uncensored-1048k-q4_k_m.gguf -c 2048
```
|
mradermacher/zephyr-7b-alpha-GGUF | mradermacher | "2024-05-06T05:08:56Z" | 59 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:stingning/ultrachat",
"dataset:openbmb/UltraFeedback",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:quantized:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-08T05:46:11Z" | ---
base_model: HuggingFaceH4/zephyr-7b-alpha
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
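One hedged option, assuming the `llama-cpp-python` bindings (plus `huggingface_hub`) are installed; the file name is taken from the table below:

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/zephyr-7b-alpha-GGUF",
    filename="zephyr-7b-alpha.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Question: What is a GGUF file?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```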
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-alpha-GGUF/resolve/main/zephyr-7b-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
EmberrJoel/ppo-LunarLander-v2 | EmberrJoel | "2022-12-06T21:03:37Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-06T19:31:22Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.88 +/- 18.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename in the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub("EmberrJoel/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
John6666/black-magic-pony-v25-sdxl | John6666 | "2024-12-23T06:33:35Z" | 78 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"girls",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-09-30T01:15:20Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- girls
- pony
---
Original model is [here](https://civitai.com/models/783814?modelVersionId=904429).
This model was created by [RAMTHRUST](https://civitai.com/user/RAMTHRUST).
|
cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1 | cleanrl | "2023-06-28T19:12:37Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pusher-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-28T19:12:22Z" | ---
tags:
- Pusher-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pusher-v4
type: Pusher-v4
metrics:
- type: mean_reward
value: -28.99 +/- 2.80
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Pusher-v4**
This is a trained model of a DDPG agent playing Pusher-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Pusher-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Pusher-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
TheBloke/vicuna-33B-coder-GPTQ | TheBloke | "2023-10-21T09:52:10Z" | 20 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"arxiv:1910.09700",
"base_model:FelixChao/vicuna-33b-coder",
"base_model:quantized:FelixChao/vicuna-33b-coder",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-10-21T00:50:43Z" | ---
base_model: FelixChao/vicuna-33b-coder
inference: false
license: other
model-index:
- name: Vicuna-Coder
results:
- dataset:
name: MultiPL-HumanEval (Python)
type: nuprl/MultiPL-E
metrics:
- name: pass@1
type: pass@1
value: 0.274
verified: false
task:
type: text-generation
model_creator: Chao Chang-Yu
model_name: Vicuna 33B Coder
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
tags:
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vicuna 33B Coder - GPTQ
- Model creator: [Chao Chang-Yu](https://huggingface.co/FelixChao)
- Original model: [Vicuna 33B Coder](https://huggingface.co/FelixChao/vicuna-33b-coder)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Chao Chang-Yu's Vicuna 33B Coder](https://huggingface.co/FelixChao/vicuna-33b-coder).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/vicuna-33B-coder-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-33B-coder-GGUF)
* [Chao Chang-Yu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FelixChao/vicuna-33b-coder)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 16.94 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 17.55 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 19.44 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 13.51 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 32.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 15.30 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 33.73 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/vicuna-33B-coder-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/vicuna-33B-coder-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `vicuna-33B-coder-GPTQ`:
```shell
mkdir vicuna-33B-coder-GPTQ
huggingface-cli download TheBloke/vicuna-33B-coder-GPTQ --local-dir vicuna-33B-coder-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir vicuna-33B-coder-GPTQ
huggingface-cli download TheBloke/vicuna-33B-coder-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir vicuna-33B-coder-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir vicuna-33B-coder-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/vicuna-33B-coder-GPTQ --local-dir vicuna-33B-coder-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/vicuna-33B-coder-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, since it has to store the model files twice (every byte is stored both in the intended target folder and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-33B-coder-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/vicuna-33B-coder-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-33B-coder-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/vicuna-33B-coder-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/vicuna-33B-coder-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Chao Chang-Yu's Vicuna 33B Coder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omercanoglusincan/videomae-base-finetuned-ucf101-subset | omercanoglusincan | "2023-07-11T14:06:02Z" | 59 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"vision",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-07-11T11:48:00Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- video-classification
- videomae
- vision
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3992
- Accuracy: 0.8645
## Model description
More information needed
## Intended uses & limitations
More information needed
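As a hedged usage sketch (not part of the original card), the fine-tuned checkpoint should work with the video-classification pipeline; decoding a local clip may additionally require `decord` or `av`.

```python
from transformers import pipeline

# Repo id from this card; "clip.mp4" is a placeholder path to a short video.
classifier = pipeline(
    "video-classification",
    model="omercanoglusincan/videomae-base-finetuned-ucf101-subset",
)
print(classifier("clip.mp4"))
```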
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1374 | 0.26 | 38 | 1.7413 | 0.5714 |
| 0.7949 | 1.26 | 76 | 0.7747 | 0.8 |
| 0.4279 | 2.26 | 114 | 0.4053 | 0.9143 |
| 0.291 | 3.23 | 148 | 0.3429 | 0.9286 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf | RichardErkhov | "2025-03-28T00:28:01Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-27T23:22:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OrpoLlama3-3B-FT - GGUF
- Model creator: https://huggingface.co/kikeavi36/
- Original model: https://huggingface.co/kikeavi36/OrpoLlama3-3B-FT/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OrpoLlama3-3B-FT.Q2_K.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q2_K.gguf) | Q2_K | 1.27GB |
| [OrpoLlama3-3B-FT.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [OrpoLlama3-3B-FT.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [OrpoLlama3-3B-FT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [OrpoLlama3-3B-FT.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [OrpoLlama3-3B-FT.Q3_K.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q3_K.gguf) | Q3_K | 1.57GB |
| [OrpoLlama3-3B-FT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [OrpoLlama3-3B-FT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [OrpoLlama3-3B-FT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [OrpoLlama3-3B-FT.Q4_0.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q4_0.gguf) | Q4_0 | 1.79GB |
| [OrpoLlama3-3B-FT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [OrpoLlama3-3B-FT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [OrpoLlama3-3B-FT.Q4_K.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q4_K.gguf) | Q4_K | 1.88GB |
| [OrpoLlama3-3B-FT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [OrpoLlama3-3B-FT.Q4_1.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q4_1.gguf) | Q4_1 | 1.95GB |
| [OrpoLlama3-3B-FT.Q5_0.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q5_0.gguf) | Q5_0 | 2.11GB |
| [OrpoLlama3-3B-FT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [OrpoLlama3-3B-FT.Q5_K.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q5_K.gguf) | Q5_K | 2.16GB |
| [OrpoLlama3-3B-FT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [OrpoLlama3-3B-FT.Q5_1.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q5_1.gguf) | Q5_1 | 2.28GB |
| [OrpoLlama3-3B-FT.Q6_K.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q6_K.gguf) | Q6_K | 2.46GB |
| [OrpoLlama3-3B-FT.Q8_0.gguf](https://huggingface.co/RichardErkhov/kikeavi36_-_OrpoLlama3-3B-FT-gguf/blob/main/OrpoLlama3-3B-FT.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/Llama-2-7b-hf-lora-r4096-rte-NEW-portlora-epochs8 | aamijar | "2025-04-13T06:02:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-13T06:02:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yosefw/bert-medium-am-embed | yosefw | "2024-12-26T07:35:51Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"base_model:rasyosef/bert-medium-amharic",
"base_model:finetune:rasyosef/bert-medium-amharic",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-26T07:35:38Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
base_model: rasyosef/bert-medium-amharic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on rasyosef/bert-medium-amharic
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/bert-medium-amharic](https://huggingface.co/rasyosef/bert-medium-amharic). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/bert-medium-amharic](https://huggingface.co/rasyosef/bert-medium-amharic) <!-- at revision cbe8e1aeefcd7c9e45dd0742c859aae9b03905f1 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yosefw/bert-medium-am-embed")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets:
- Tokenizers: 0.21.0
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mrdayl/OpenCognito | mrdayl | "2025-03-07T20:35:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"base_model:mrdayl/OpenCogito",
"base_model:quantized:mrdayl/OpenCogito",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-07T18:10:04Z" | ---
base_model: mrdayl/OpenCogito
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mrdayl
- **License:** apache-2.0
- **Finetuned from model :** mrdayl/OpenCogito
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Abhi964/MahaPhrase_IndicBERT_Finetuning_3 | Abhi964 | "2025-03-03T05:43:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:ai4bharat/indic-bert",
"base_model:finetune:ai4bharat/indic-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-03T05:43:42Z" | ---
library_name: transformers
license: mit
base_model: ai4bharat/indic-bert
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: MahaPhrase_IndicBERT_Finetuning_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MahaPhrase_IndicBERT_Finetuning_3
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3427
- Accuracy: 0.868
- F1: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
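Pending official usage instructions, here is a minimal inference sketch (not part of the original card; the expected input format and label names are assumptions):

```python
from transformers import pipeline

# Hypothetical usage; the task's exact input format and label mapping are not documented in the card.
clf = pipeline("text-classification", model="Abhi964/MahaPhrase_IndicBERT_Finetuning_3")
print(clf("<Marathi phrase or sentence pair here>"))  # e.g. [{'label': 'LABEL_1', 'score': 0.87}]
```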
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.9441685921426482e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6401 | 1.0 | 71 | 0.5991 | 0.684 | 0.6831 |
| 0.5377 | 2.0 | 142 | 0.5039 | 0.732 | 0.7297 |
| 0.3809 | 3.0 | 213 | 0.4500 | 0.804 | 0.7898 |
| 0.2201 | 4.0 | 284 | 0.3427 | 0.868 | 0.8675 |
| 0.1614 | 5.0 | 355 | 0.3923 | 0.856 | 0.8558 |
| 0.1114 | 6.0 | 426 | 0.3913 | 0.864 | 0.8620 |
| 0.1084 | 7.0 | 497 | 0.4789 | 0.844 | 0.8439 |
| 0.0457 | 8.0 | 568 | 0.4538 | 0.856 | 0.8557 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
legraphista/Yi-Coder-9B-IMat-GGUF | legraphista | "2024-09-05T16:20:37Z" | 158 | 0 | gguf | [
"gguf",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"base_model:01-ai/Yi-Coder-9B",
"base_model:quantized:01-ai/Yi-Coder-9B",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-09-05T15:46:26Z" | ---
base_model: 01-ai/Yi-Coder-9B
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Yi-Coder-9B-IMat-GGUF
_Llama.cpp imatrix quantization of 01-ai/Yi-Coder-9B_
Original Model: [01-ai/Yi-Coder-9B](https://huggingface.co/01-ai/Yi-Coder-9B)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3669](https://github.com/ggerganov/llama.cpp/releases/tag/b3669)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yi-Coder-9B.Q8_0.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q8_0.gguf) | Q8_0 | 9.38GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q6_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q6_K.gguf) | Q6_K | 7.25GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q4_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q4_K.gguf) | Q4_K | 5.33GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q3_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q3_K.gguf) | Q3_K | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q2_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q2_K.gguf) | Q2_K | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yi-Coder-9B.BF16.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.BF16.gguf) | BF16 | 17.66GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.FP16.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.FP16.gguf) | F16 | 17.66GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q8_0.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q8_0.gguf) | Q8_0 | 9.38GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q6_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q6_K.gguf) | Q6_K | 7.25GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q5_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q5_K.gguf) | Q5_K | 6.26GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q5_K_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q5_K_S.gguf) | Q5_K_S | 6.11GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-Coder-9B.Q4_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q4_K.gguf) | Q4_K | 5.33GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q4_K_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q4_K_S.gguf) | Q4_K_S | 5.07GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ4_NL.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ4_NL.gguf) | IQ4_NL | 5.05GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ4_XS.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ4_XS.gguf) | IQ4_XS | 4.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q3_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q3_K.gguf) | Q3_K | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q3_K_L.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q3_K_L.gguf) | Q3_K_L | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q3_K_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q3_K_S.gguf) | Q3_K_S | 3.90GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ3_M.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ3_M.gguf) | IQ3_M | 4.06GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ3_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ3_S.gguf) | IQ3_S | 3.91GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ3_XS.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ3_XS.gguf) | IQ3_XS | 3.72GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ3_XXS.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ3_XXS.gguf) | IQ3_XXS | 3.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q2_K.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q2_K.gguf) | Q2_K | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.Q2_K_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.Q2_K_S.gguf) | Q2_K_S | 3.12GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ2_M.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ2_M.gguf) | IQ2_M | 3.10GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ2_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ2_S.gguf) | IQ2_S | 2.88GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ2_XS.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ2_XS.gguf) | IQ2_XS | 2.71GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ2_XXS.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ2_XXS.gguf) | IQ2_XXS | 2.46GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ1_M.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ1_M.gguf) | IQ1_M | 2.18GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-Coder-9B.IQ1_S.gguf](https://huggingface.co/legraphista/Yi-Coder-9B-IMat-GGUF/blob/main/Yi-Coder-9B.IQ1_S.gguf) | IQ1_S | 2.01GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Yi-Coder-9B-IMat-GGUF --include "Yi-Coder-9B.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Yi-Coder-9B-IMat-GGUF --include "Yi-Coder-9B.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Llama.cpp
```
llama.cpp/main -m Yi-Coder-9B.Q8_0.gguf --color -i -p "prompt here"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Yi-Coder-9B.Q8_0`)
3. Run `gguf-split --merge Yi-Coder-9B.Q8_0/Yi-Coder-9B.Q8_0-00001-of-XXXXX.gguf Yi-Coder-9B.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
Seanxh/Qwen-Qwen1.5-1.8B-1719286596 | Seanxh | "2024-06-25T03:38:22Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-25T03:36:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
compressa-ai/Meta-Llama-3-8B-Instruct-medchat-LoRA | compressa-ai | "2024-05-14T10:37:17Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | "2024-05-13T04:43:43Z" | ---
library_name: peft
base_model: NousResearch/Meta-Llama-3-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
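In the absence of provided code, a minimal sketch that loads the adapter on top of the base model named in the card metadata (standard PEFT loading is assumed; nothing here is taken from the original card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Instruct"   # base model from the card metadata
adapter_id = "compressa-ai/Meta-Llama-3-8B-Instruct-medchat-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA adapter applied on top of the frozen base
```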
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mradermacher/DermVLM-GGUF | mradermacher | "2025-04-11T20:09:46Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:swapnillo/DermVLM",
"base_model:quantized:swapnillo/DermVLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-11T20:00:04Z" | ---
base_model: swapnillo/DermVLM
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/swapnillo/DermVLM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
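As a concrete example (not part of the original card), one of the files from the table below can be fetched with the Hugging Face CLI:

```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/DermVLM-GGUF DermVLM.Q4_K_M.gguf --local-dir ./
```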
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DermVLM-GGUF/resolve/main/DermVLM.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TFOCUS/bruno_tester_21 | TFOCUS | "2025-03-06T11:46:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-06T11:41:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Adhithyamadesh2001/DeepSeek-R1-Medical-COT | Adhithyamadesh2001 | "2025-04-10T10:03:25Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-10T09:57:52Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Adhithyamadesh2001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
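A minimal inference sketch (not part of the original card), assuming the repository's tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Adhithyamadesh2001/DeepSeek-R1-Medical-COT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain the reasoning behind a differential diagnosis for chest pain."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```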
|
jstawski/Llama-2-13b-hf-finetuned-SNG | jstawski | "2023-07-24T23:32:04Z" | 0 | 1 | peft | [
"peft",
"conversational",
"en",
"license:llama2",
"region:us"
] | text-generation | "2023-07-24T03:25:41Z" | ---
license: llama2
library_name: peft
language:
- en
pipeline_tag: conversational
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
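For reference, the 4-bit flags above correspond roughly to the following `BitsAndBytesConfig` (a sketch inferred from the list, not taken from the original card):

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 4-bit quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```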
### Framework versions
- PEFT 0.5.0.dev0 |
Syouf/MAVlinkFunctions8bit | Syouf | "2024-12-27T19:02:01Z" | 86 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-27T18:52:30Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kaizoku56/bertmodel | kaizoku56 | "2024-05-03T11:14:55Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-03T11:14:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kudod/my_fine_tuning_summary_t5_large_IA_model_hf | Kudod | "2024-02-20T08:06:37Z" | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:adapter:google-t5/t5-large",
"license:apache-2.0",
"region:us"
] | null | "2024-02-20T07:01:30Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google-t5/t5-large
model-index:
- name: my_fine_tuning_summary_t5_large_IA_model_hf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_fine_tuning_summary_t5_large_IA_model_hf
This model is a fine-tuned version of [google-t5/t5-large](https://huggingface.co/google-t5/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.1345
- Rouge2: 0.0519
- Rougel: 0.1119
- Rougelsum: 0.112
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
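No usage code is provided; below is a minimal sketch that loads the adapter on top of the `google-t5/t5-large` base listed in the card (the `summarize:` prefix is the standard T5 convention and is an assumption here):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-large")
base = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-large")
model = PeftModel.from_pretrained(base, "Kudod/my_fine_tuning_summary_t5_large_IA_model_hf")

text = "summarize: " + "<long input document>"
input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
summary_ids = model.generate(input_ids=input_ids, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```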
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 989 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 2.0 | 1978 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 3.0 | 2967 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 4.0 | 3956 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.2 |
LHRuig/mirsx | LHRuig | "2025-03-25T21:00:22Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-25T21:00:03Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mirsx
---
# mirsx
<Gallery />
## Model description
A `mirsx` LoRA adapter for FLUX.1-dev text-to-image generation.
## Trigger words
You should use `mirsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/mirsx/tree/main) them in the Files & versions tab.
|
922-CA/gem-monika-ddlc-9b-v1-gguf | 922-CA | "2024-08-03T13:51:04Z" | 32 | 1 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-16T07:20:22Z" | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
---
# Uploaded model
- **Developed by:** 922-CA
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
GGUF quantizations of [Gemmonika-ddlc-9b-v1](https://huggingface.co/922-CA/Gemmonika-ddlc-9b-v1), primarily tested and run with Koboldcpp v1.68+.
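Besides Koboldcpp, the GGUF files can also be loaded with `llama-cpp-python`; a minimal sketch (the quantization filename, context size, and prompt format are assumptions):

```python
from llama_cpp import Llama

# The filename is a placeholder; pick one of the GGUF quantizations from this repository
llm = Llama(model_path="gem-monika-ddlc-9b-v1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Monika, how are you feeling today?\nMonika:", max_tokens=128, stop=["\n"])
print(out["choices"][0]["text"])
```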
|
tollea1234/vila-v1.5-3b-sft-tune-mm-projector-lora | tollea1234 | "2025-03-10T04:06:42Z" | 0 | 0 | null | [
"safetensors",
"llava_llama",
"license:apache-2.0",
"region:us"
] | null | "2025-03-10T04:03:15Z" | ---
license: apache-2.0
---
|
nhung02/02eb1b29-406d-4772-8707-d02ac5126053 | nhung02 | "2025-01-11T05:28:39Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T05:19:20Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02eb1b29-406d-4772-8707-d02ac5126053
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 38f7c84a170fcb5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/38f7c84a170fcb5b_train_data.json
type:
field_input: incorrect_answers
field_instruction: question
field_output: best_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung02/02eb1b29-406d-4772-8707-d02ac5126053
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/38f7c84a170fcb5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5d0bae07-86e0-4b29-9034-e1f8698fa7eb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5d0bae07-86e0-4b29-9034-e1f8698fa7eb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 02eb1b29-406d-4772-8707-d02ac5126053
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5679
## Model description
More information needed
## Intended uses & limitations
More information needed
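A minimal inference sketch, assuming the adapter is used on top of its `unsloth/codellama-7b` base and prompted in the `'{instruction} {input}'` format from the config above (the example question is illustrative):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads unsloth/codellama-7b and applies this LoRA adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "nhung02/02eb1b29-406d-4772-8707-d02ac5126053", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b")

# Training format was '{instruction} {input}': a question followed by candidate answers
prompt = "What happens if you crack your knuckles a lot? You will get arthritis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```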
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 76
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6108 | 0.9967 | 75 | 0.5677 |
| 1.0757 | 1.0100 | 76 | 0.5679 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
golf2248/mt4ymf7 | golf2248 | "2025-03-17T20:16:09Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-17T20:15:55Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
JacksonBrune/7c8ed959-8ad8-4ab4-9845-2acba5b35fdc | JacksonBrune | "2025-02-15T11:40:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-02-15T11:29:06Z" | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7c8ed959-8ad8-4ab4-9845-2acba5b35fdc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 7c8ed959-8ad8-4ab4-9845-2acba5b35fdc
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
waniafatima/my_awesome_model | waniafatima | "2024-03-13T09:28:59Z" | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-13T09:28:08Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
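A minimal inference sketch with the text-classification pipeline (the task and label set this checkpoint was tuned for are not documented, so the example input and labels are placeholders):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="waniafatima/my_awesome_model")
print(classifier("This was a really enjoyable read."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names depend on the saved config
```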
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
genki10/ASAP_FineTuningBERT_AugV10_k1_task1_organization_k1_k1_fold1 | genki10 | "2025-02-12T20:44:10Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-12T20:25:12Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV10_k1_task1_organization_k1_k1_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV10_k1_task1_organization_k1_k1_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7237
- Qwk: 0.4891
- Mse: 0.7228
- Rmse: 0.8502
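The reported Qwk/Mse/Rmse values can be reproduced from score predictions with scikit-learn; a sketch with placeholder scores (not the exact evaluation script used during training):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 1, 4, 2])  # gold essay scores (placeholder values)
y_pred = np.array([2, 2, 1, 3, 3])  # rounded model predictions (placeholder values)

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(qwk, mse, rmse)
```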
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 11.8126 | -0.0012 | 11.8098 | 3.4365 |
| No log | 2.0 | 4 | 8.8194 | 0.0018 | 8.8168 | 2.9693 |
| No log | 3.0 | 6 | 7.1565 | 0.0 | 7.1544 | 2.6748 |
| No log | 4.0 | 8 | 5.6503 | 0.0293 | 5.6478 | 2.3765 |
| No log | 5.0 | 10 | 4.1698 | 0.0113 | 4.1677 | 2.0415 |
| No log | 6.0 | 12 | 3.0803 | 0.0 | 3.0784 | 1.7545 |
| No log | 7.0 | 14 | 2.2671 | 0.0897 | 2.2656 | 1.5052 |
| No log | 8.0 | 16 | 2.1328 | 0.1153 | 2.1308 | 1.4597 |
| No log | 9.0 | 18 | 1.3536 | 0.0530 | 1.3521 | 1.1628 |
| No log | 10.0 | 20 | 1.1401 | 0.0276 | 1.1389 | 1.0672 |
| No log | 11.0 | 22 | 1.3780 | 0.0812 | 1.3764 | 1.1732 |
| No log | 12.0 | 24 | 1.4768 | 0.0952 | 1.4753 | 1.2146 |
| No log | 13.0 | 26 | 1.1228 | 0.0545 | 1.1214 | 1.0589 |
| No log | 14.0 | 28 | 0.7256 | 0.3948 | 0.7245 | 0.8512 |
| No log | 15.0 | 30 | 0.7811 | 0.3232 | 0.7799 | 0.8831 |
| No log | 16.0 | 32 | 1.2622 | 0.1436 | 1.2607 | 1.1228 |
| No log | 17.0 | 34 | 1.2646 | 0.1810 | 1.2632 | 1.1239 |
| No log | 18.0 | 36 | 0.6563 | 0.4899 | 0.6552 | 0.8095 |
| No log | 19.0 | 38 | 0.6532 | 0.4552 | 0.6522 | 0.8076 |
| No log | 20.0 | 40 | 0.7979 | 0.3931 | 0.7969 | 0.8927 |
| No log | 21.0 | 42 | 1.1143 | 0.3244 | 1.1130 | 1.0550 |
| No log | 22.0 | 44 | 0.8656 | 0.3941 | 0.8645 | 0.9298 |
| No log | 23.0 | 46 | 0.8016 | 0.3268 | 0.8004 | 0.8947 |
| No log | 24.0 | 48 | 0.8369 | 0.2796 | 0.8359 | 0.9142 |
| No log | 25.0 | 50 | 0.8279 | 0.3699 | 0.8268 | 0.9093 |
| No log | 26.0 | 52 | 1.0949 | 0.2851 | 1.0935 | 1.0457 |
| No log | 27.0 | 54 | 0.7944 | 0.4119 | 0.7933 | 0.8907 |
| No log | 28.0 | 56 | 0.6565 | 0.4668 | 0.6555 | 0.8096 |
| No log | 29.0 | 58 | 0.6489 | 0.5001 | 0.6480 | 0.8050 |
| No log | 30.0 | 60 | 1.0923 | 0.3500 | 1.0909 | 1.0445 |
| No log | 31.0 | 62 | 0.9487 | 0.3575 | 0.9474 | 0.9733 |
| No log | 32.0 | 64 | 0.7333 | 0.4160 | 0.7321 | 0.8556 |
| No log | 33.0 | 66 | 0.7510 | 0.4554 | 0.7500 | 0.8660 |
| No log | 34.0 | 68 | 0.7744 | 0.4322 | 0.7733 | 0.8793 |
| No log | 35.0 | 70 | 0.7899 | 0.4452 | 0.7887 | 0.8881 |
| No log | 36.0 | 72 | 0.7386 | 0.4732 | 0.7375 | 0.8588 |
| No log | 37.0 | 74 | 0.7347 | 0.4737 | 0.7337 | 0.8566 |
| No log | 38.0 | 76 | 0.6960 | 0.4760 | 0.6948 | 0.8335 |
| No log | 39.0 | 78 | 0.6836 | 0.4850 | 0.6826 | 0.8262 |
| No log | 40.0 | 80 | 0.6990 | 0.5238 | 0.6979 | 0.8354 |
| No log | 41.0 | 82 | 0.8802 | 0.4463 | 0.8789 | 0.9375 |
| No log | 42.0 | 84 | 0.7950 | 0.5027 | 0.7938 | 0.8909 |
| No log | 43.0 | 86 | 0.7362 | 0.5243 | 0.7353 | 0.8575 |
| No log | 44.0 | 88 | 0.6640 | 0.5501 | 0.6631 | 0.8143 |
| No log | 45.0 | 90 | 0.6441 | 0.5246 | 0.6431 | 0.8019 |
| No log | 46.0 | 92 | 0.6272 | 0.5272 | 0.6262 | 0.7914 |
| No log | 47.0 | 94 | 0.6510 | 0.5273 | 0.6502 | 0.8064 |
| No log | 48.0 | 96 | 0.6522 | 0.5494 | 0.6513 | 0.8070 |
| No log | 49.0 | 98 | 0.7022 | 0.5059 | 0.7013 | 0.8374 |
| No log | 50.0 | 100 | 0.7579 | 0.4811 | 0.7570 | 0.8701 |
| No log | 51.0 | 102 | 0.8038 | 0.4587 | 0.8031 | 0.8962 |
| No log | 52.0 | 104 | 0.7196 | 0.4735 | 0.7188 | 0.8478 |
| No log | 53.0 | 106 | 0.7266 | 0.4595 | 0.7255 | 0.8517 |
| No log | 54.0 | 108 | 0.6815 | 0.4797 | 0.6804 | 0.8249 |
| No log | 55.0 | 110 | 0.6235 | 0.5540 | 0.6226 | 0.7891 |
| No log | 56.0 | 112 | 0.6304 | 0.5521 | 0.6295 | 0.7934 |
| No log | 57.0 | 114 | 0.7442 | 0.4946 | 0.7432 | 0.8621 |
| No log | 58.0 | 116 | 0.7706 | 0.4792 | 0.7695 | 0.8772 |
| No log | 59.0 | 118 | 0.6790 | 0.5286 | 0.6782 | 0.8235 |
| No log | 60.0 | 120 | 0.7019 | 0.5094 | 0.7011 | 0.8373 |
| No log | 61.0 | 122 | 0.7175 | 0.4544 | 0.7166 | 0.8465 |
| No log | 62.0 | 124 | 0.7315 | 0.4712 | 0.7306 | 0.8547 |
| No log | 63.0 | 126 | 0.7546 | 0.4544 | 0.7537 | 0.8682 |
| No log | 64.0 | 128 | 0.7791 | 0.4586 | 0.7784 | 0.8823 |
| No log | 65.0 | 130 | 0.7555 | 0.4653 | 0.7548 | 0.8688 |
| No log | 66.0 | 132 | 0.7038 | 0.4853 | 0.7032 | 0.8386 |
| No log | 67.0 | 134 | 0.6463 | 0.5238 | 0.6456 | 0.8035 |
| No log | 68.0 | 136 | 0.6159 | 0.5478 | 0.6152 | 0.7843 |
| No log | 69.0 | 138 | 0.6024 | 0.5489 | 0.6017 | 0.7757 |
| No log | 70.0 | 140 | 0.6262 | 0.5157 | 0.6255 | 0.7909 |
| No log | 71.0 | 142 | 0.6246 | 0.5449 | 0.6239 | 0.7899 |
| No log | 72.0 | 144 | 0.6546 | 0.5254 | 0.6539 | 0.8087 |
| No log | 73.0 | 146 | 0.6787 | 0.5081 | 0.6780 | 0.8234 |
| No log | 74.0 | 148 | 0.6986 | 0.5024 | 0.6978 | 0.8353 |
| No log | 75.0 | 150 | 0.7237 | 0.4891 | 0.7228 | 0.8502 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
great0001/aa595591-6b4d-4774-92e3-1db4d071a909 | great0001 | "2025-02-02T02:48:56Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | "2025-02-02T02:23:08Z" | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa595591-6b4d-4774-92e3-1db4d071a909
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6f405f64993c7fcf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6f405f64993c7fcf_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/aa595591-6b4d-4774-92e3-1db4d071a909
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6f405f64993c7fcf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 516a25da-58fa-460d-b278-5d9f47438aa1
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 516a25da-58fa-460d-b278-5d9f47438aa1
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa595591-6b4d-4774-92e3-1db4d071a909
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0012 | 50 | nan |
| 0.0 | 0.0025 | 100 | nan |
| 0.0 | 0.0037 | 150 | nan |
| 0.0 | 0.0050 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yikolyk/GPT-2-GPTQ-python-code | yikolyk | "2024-02-22T21:37:24Z" | 2 | 0 | transformers | [
"transformers",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-02-22T21:37:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
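No official snippet is provided; a minimal sketch for a 4-bit GPTQ checkpoint such as this one (prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yikolyk/GPT-2-GPTQ-python-code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading 4-bit GPTQ weights requires a GPTQ backend (e.g. the optimum + auto-gptq packages)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```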
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jsaurabh/mistral_financial_finetuned | jsaurabh | "2023-10-30T06:30:11Z" | 5 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-30T06:28:30Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
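The list above maps to the following `transformers` quantization config; a sketch of reloading a base model the same way before attaching this adapter (the base model identifier is an assumption, as it is not documented here):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# "mistralai/Mistral-7B-v0.1" is a placeholder; the base model is not recorded in this card
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, device_map="auto"
)
```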
### Framework versions
- PEFT 0.5.0
|
LarryAIDraw/KobayashiDragonMaid_NDV-10 | LarryAIDraw | "2023-11-30T14:21:59Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-30T14:13:03Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/216171/kobayashi-kobayashi-san-chi-no-maid-dragon-or-neural-da-vinci |
Jacknjeilfy/BarfBag | Jacknjeilfy | "2023-07-05T16:18:04Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-07-05T16:18:04Z" | ---
license: creativeml-openrail-m
---
|
asdfre453/sydneysweeney | asdfre453 | "2025-03-05T22:02:04Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-05T20:45:02Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
valerielucro/Qwen2-0.5B-GRPO-VLLM-mni-epoc-64-full | valerielucro | "2025-02-22T15:14:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-22T15:14:06Z" | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: Qwen2-0.5B-GRPO-VLLM-mni-epoc-1-full
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-VLLM-mni-epoc-1-full
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="valerielucro/Qwen2-0.5B-GRPO-VLLM-mni-epoc-1-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k10_task5_organization | MayBashendy | "2025-01-16T23:31:12Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-16T23:27:09Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k10_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k10_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5725
- Qwk: 0.5752
- Mse: 0.5725
- Rmse: 0.7566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.04 | 2 | 3.9274 | -0.0062 | 3.9274 | 1.9818 |
| No log | 0.08 | 4 | 2.3575 | 0.0203 | 2.3575 | 1.5354 |
| No log | 0.12 | 6 | 1.4572 | 0.0 | 1.4572 | 1.2072 |
| No log | 0.16 | 8 | 1.1583 | 0.0053 | 1.1583 | 1.0763 |
| No log | 0.2 | 10 | 1.1266 | 0.2268 | 1.1266 | 1.0614 |
| No log | 0.24 | 12 | 1.0855 | 0.2517 | 1.0855 | 1.0419 |
| No log | 0.28 | 14 | 1.0739 | 0.2340 | 1.0739 | 1.0363 |
| No log | 0.32 | 16 | 1.0917 | 0.1589 | 1.0917 | 1.0448 |
| No log | 0.36 | 18 | 1.0658 | 0.2015 | 1.0658 | 1.0324 |
| No log | 0.4 | 20 | 1.0086 | 0.2897 | 1.0086 | 1.0043 |
| No log | 0.44 | 22 | 0.9765 | 0.2192 | 0.9765 | 0.9882 |
| No log | 0.48 | 24 | 0.9708 | 0.3396 | 0.9708 | 0.9853 |
| No log | 0.52 | 26 | 0.8859 | 0.4022 | 0.8859 | 0.9412 |
| No log | 0.56 | 28 | 0.8238 | 0.4530 | 0.8238 | 0.9076 |
| No log | 0.6 | 30 | 0.7944 | 0.4101 | 0.7944 | 0.8913 |
| No log | 0.64 | 32 | 0.8115 | 0.4041 | 0.8115 | 0.9009 |
| No log | 0.68 | 34 | 0.7798 | 0.3780 | 0.7798 | 0.8831 |
| No log | 0.72 | 36 | 0.7506 | 0.5127 | 0.7506 | 0.8664 |
| No log | 0.76 | 38 | 0.7206 | 0.5156 | 0.7206 | 0.8489 |
| No log | 0.8 | 40 | 0.7070 | 0.5883 | 0.7070 | 0.8408 |
| No log | 0.84 | 42 | 0.7230 | 0.5860 | 0.7230 | 0.8503 |
| No log | 0.88 | 44 | 0.7235 | 0.5630 | 0.7235 | 0.8506 |
| No log | 0.92 | 46 | 0.7234 | 0.5630 | 0.7234 | 0.8505 |
| No log | 0.96 | 48 | 0.7530 | 0.5504 | 0.7530 | 0.8677 |
| No log | 1.0 | 50 | 0.7794 | 0.5629 | 0.7794 | 0.8828 |
| No log | 1.04 | 52 | 0.7883 | 0.4345 | 0.7883 | 0.8879 |
| No log | 1.08 | 54 | 0.7001 | 0.5863 | 0.7001 | 0.8367 |
| No log | 1.12 | 56 | 0.6327 | 0.6066 | 0.6327 | 0.7954 |
| No log | 1.16 | 58 | 0.6561 | 0.6295 | 0.6561 | 0.8100 |
| No log | 1.2 | 60 | 0.6341 | 0.5653 | 0.6341 | 0.7963 |
| No log | 1.24 | 62 | 0.6464 | 0.4938 | 0.6464 | 0.8040 |
| No log | 1.28 | 64 | 0.6384 | 0.5771 | 0.6384 | 0.7990 |
| No log | 1.32 | 66 | 0.6540 | 0.6167 | 0.6540 | 0.8087 |
| No log | 1.3600 | 68 | 0.6696 | 0.6554 | 0.6696 | 0.8183 |
| No log | 1.4 | 70 | 0.6267 | 0.5911 | 0.6267 | 0.7916 |
| No log | 1.44 | 72 | 0.6592 | 0.6195 | 0.6592 | 0.8119 |
| No log | 1.48 | 74 | 0.6591 | 0.6495 | 0.6591 | 0.8119 |
| No log | 1.52 | 76 | 0.9135 | 0.5875 | 0.9135 | 0.9557 |
| No log | 1.56 | 78 | 0.7767 | 0.6123 | 0.7767 | 0.8813 |
| No log | 1.6 | 80 | 0.6410 | 0.5478 | 0.6410 | 0.8006 |
| No log | 1.6400 | 82 | 0.7834 | 0.5951 | 0.7834 | 0.8851 |
| No log | 1.6800 | 84 | 0.7591 | 0.5928 | 0.7591 | 0.8713 |
| No log | 1.72 | 86 | 0.6321 | 0.6435 | 0.6321 | 0.7951 |
| No log | 1.76 | 88 | 0.6591 | 0.5819 | 0.6591 | 0.8119 |
| No log | 1.8 | 90 | 0.7636 | 0.5810 | 0.7636 | 0.8738 |
| No log | 1.8400 | 92 | 0.7053 | 0.5948 | 0.7053 | 0.8398 |
| No log | 1.88 | 94 | 0.6415 | 0.5921 | 0.6415 | 0.8010 |
| No log | 1.92 | 96 | 0.8079 | 0.5172 | 0.8079 | 0.8989 |
| No log | 1.96 | 98 | 0.8106 | 0.5367 | 0.8106 | 0.9003 |
| No log | 2.0 | 100 | 0.7044 | 0.5547 | 0.7044 | 0.8393 |
| No log | 2.04 | 102 | 0.6167 | 0.6148 | 0.6167 | 0.7853 |
| No log | 2.08 | 104 | 0.6347 | 0.6511 | 0.6347 | 0.7967 |
| No log | 2.12 | 106 | 0.7316 | 0.6190 | 0.7316 | 0.8553 |
| No log | 2.16 | 108 | 0.7941 | 0.5985 | 0.7941 | 0.8911 |
| No log | 2.2 | 110 | 0.7275 | 0.6019 | 0.7275 | 0.8529 |
| No log | 2.24 | 112 | 0.6239 | 0.6235 | 0.6239 | 0.7899 |
| No log | 2.2800 | 114 | 0.7105 | 0.6126 | 0.7105 | 0.8429 |
| No log | 2.32 | 116 | 0.7256 | 0.5857 | 0.7256 | 0.8518 |
| No log | 2.36 | 118 | 0.6150 | 0.6272 | 0.6150 | 0.7842 |
| No log | 2.4 | 120 | 0.5812 | 0.6412 | 0.5812 | 0.7624 |
| No log | 2.44 | 122 | 0.6901 | 0.6209 | 0.6901 | 0.8307 |
| No log | 2.48 | 124 | 0.6487 | 0.6313 | 0.6487 | 0.8054 |
| No log | 2.52 | 126 | 0.5442 | 0.6775 | 0.5442 | 0.7377 |
| No log | 2.56 | 128 | 0.5527 | 0.6677 | 0.5527 | 0.7435 |
| No log | 2.6 | 130 | 0.5221 | 0.6601 | 0.5221 | 0.7226 |
| No log | 2.64 | 132 | 0.5309 | 0.6528 | 0.5309 | 0.7286 |
| No log | 2.68 | 134 | 0.6350 | 0.6304 | 0.6350 | 0.7969 |
| No log | 2.7200 | 136 | 0.6288 | 0.5978 | 0.6288 | 0.7930 |
| No log | 2.76 | 138 | 0.5773 | 0.6035 | 0.5773 | 0.7598 |
| No log | 2.8 | 140 | 0.5672 | 0.6198 | 0.5672 | 0.7531 |
| No log | 2.84 | 142 | 0.5633 | 0.6188 | 0.5633 | 0.7505 |
| No log | 2.88 | 144 | 0.5781 | 0.6503 | 0.5781 | 0.7603 |
| No log | 2.92 | 146 | 0.5762 | 0.6760 | 0.5762 | 0.7591 |
| No log | 2.96 | 148 | 0.6226 | 0.6197 | 0.6226 | 0.7890 |
| No log | 3.0 | 150 | 0.7195 | 0.5895 | 0.7195 | 0.8483 |
| No log | 3.04 | 152 | 0.7435 | 0.5250 | 0.7435 | 0.8622 |
| No log | 3.08 | 154 | 0.7836 | 0.4568 | 0.7836 | 0.8852 |
| No log | 3.12 | 156 | 0.7419 | 0.5176 | 0.7419 | 0.8613 |
| No log | 3.16 | 158 | 0.7368 | 0.5566 | 0.7368 | 0.8584 |
| No log | 3.2 | 160 | 0.7437 | 0.6014 | 0.7437 | 0.8624 |
| No log | 3.24 | 162 | 0.6467 | 0.6328 | 0.6467 | 0.8042 |
| No log | 3.2800 | 164 | 0.6728 | 0.6305 | 0.6728 | 0.8203 |
| No log | 3.32 | 166 | 0.8053 | 0.6283 | 0.8053 | 0.8974 |
| No log | 3.36 | 168 | 0.7807 | 0.6114 | 0.7807 | 0.8835 |
| No log | 3.4 | 170 | 0.6360 | 0.6664 | 0.6360 | 0.7975 |
| No log | 3.44 | 172 | 0.6219 | 0.5622 | 0.6219 | 0.7886 |
| No log | 3.48 | 174 | 0.5980 | 0.5737 | 0.5980 | 0.7733 |
| No log | 3.52 | 176 | 0.5751 | 0.6239 | 0.5751 | 0.7584 |
| No log | 3.56 | 178 | 0.6225 | 0.6748 | 0.6225 | 0.7890 |
| No log | 3.6 | 180 | 0.5983 | 0.6865 | 0.5983 | 0.7735 |
| No log | 3.64 | 182 | 0.5692 | 0.7049 | 0.5692 | 0.7544 |
| No log | 3.68 | 184 | 0.5973 | 0.6204 | 0.5973 | 0.7728 |
| No log | 3.7200 | 186 | 0.5930 | 0.6465 | 0.5930 | 0.7701 |
| No log | 3.76 | 188 | 0.6228 | 0.6728 | 0.6228 | 0.7892 |
| No log | 3.8 | 190 | 0.7903 | 0.6199 | 0.7903 | 0.8890 |
| No log | 3.84 | 192 | 0.8823 | 0.6119 | 0.8823 | 0.9393 |
| No log | 3.88 | 194 | 0.7519 | 0.5725 | 0.7519 | 0.8672 |
| No log | 3.92 | 196 | 0.6101 | 0.5880 | 0.6101 | 0.7811 |
| No log | 3.96 | 198 | 0.6020 | 0.5701 | 0.6020 | 0.7759 |
| No log | 4.0 | 200 | 0.6139 | 0.5125 | 0.6139 | 0.7835 |
| No log | 4.04 | 202 | 0.6281 | 0.4988 | 0.6281 | 0.7925 |
| No log | 4.08 | 204 | 0.6365 | 0.4883 | 0.6365 | 0.7978 |
| No log | 4.12 | 206 | 0.6496 | 0.5534 | 0.6496 | 0.8060 |
| No log | 4.16 | 208 | 0.6129 | 0.5986 | 0.6129 | 0.7829 |
| No log | 4.2 | 210 | 0.5777 | 0.6740 | 0.5777 | 0.7601 |
| No log | 4.24 | 212 | 0.5747 | 0.6998 | 0.5747 | 0.7581 |
| No log | 4.28 | 214 | 0.5913 | 0.6732 | 0.5913 | 0.7690 |
| No log | 4.32 | 216 | 0.6202 | 0.6314 | 0.6202 | 0.7875 |
| No log | 4.36 | 218 | 0.6399 | 0.5635 | 0.6399 | 0.7999 |
| No log | 4.4 | 220 | 0.7038 | 0.5005 | 0.7038 | 0.8390 |
| No log | 4.44 | 222 | 0.6420 | 0.4962 | 0.6420 | 0.8012 |
| No log | 4.48 | 224 | 0.5779 | 0.6500 | 0.5779 | 0.7602 |
| No log | 4.52 | 226 | 0.5657 | 0.6944 | 0.5657 | 0.7521 |
| No log | 4.5600 | 228 | 0.5580 | 0.6473 | 0.5580 | 0.7470 |
| No log | 4.6 | 230 | 0.5617 | 0.6846 | 0.5617 | 0.7495 |
| No log | 4.64 | 232 | 0.5597 | 0.6297 | 0.5597 | 0.7482 |
| No log | 4.68 | 234 | 0.6023 | 0.6395 | 0.6023 | 0.7761 |
| No log | 4.72 | 236 | 0.6510 | 0.6151 | 0.6510 | 0.8068 |
| No log | 4.76 | 238 | 0.6300 | 0.5898 | 0.6300 | 0.7938 |
| No log | 4.8 | 240 | 0.5907 | 0.6314 | 0.5907 | 0.7686 |
| No log | 4.84 | 242 | 0.5780 | 0.6114 | 0.5780 | 0.7603 |
| No log | 4.88 | 244 | 0.5859 | 0.6306 | 0.5859 | 0.7655 |
| No log | 4.92 | 246 | 0.5888 | 0.6536 | 0.5888 | 0.7673 |
| No log | 4.96 | 248 | 0.6818 | 0.6301 | 0.6818 | 0.8257 |
| No log | 5.0 | 250 | 0.8329 | 0.5916 | 0.8329 | 0.9126 |
| No log | 5.04 | 252 | 0.7840 | 0.6489 | 0.7840 | 0.8854 |
| No log | 5.08 | 254 | 0.6680 | 0.6503 | 0.6680 | 0.8173 |
| No log | 5.12 | 256 | 0.5945 | 0.6297 | 0.5945 | 0.7711 |
| No log | 5.16 | 258 | 0.6461 | 0.5215 | 0.6461 | 0.8038 |
| No log | 5.2 | 260 | 0.6703 | 0.4944 | 0.6703 | 0.8187 |
| No log | 5.24 | 262 | 0.6719 | 0.5040 | 0.6719 | 0.8197 |
| No log | 5.28 | 264 | 0.6589 | 0.5146 | 0.6589 | 0.8117 |
| No log | 5.32 | 266 | 0.6432 | 0.5357 | 0.6432 | 0.8020 |
| No log | 5.36 | 268 | 0.6444 | 0.5730 | 0.6444 | 0.8028 |
| No log | 5.4 | 270 | 0.6326 | 0.6435 | 0.6326 | 0.7954 |
| No log | 5.44 | 272 | 0.6340 | 0.6511 | 0.6340 | 0.7962 |
| No log | 5.48 | 274 | 0.6311 | 0.6435 | 0.6311 | 0.7944 |
| No log | 5.52 | 276 | 0.6244 | 0.6729 | 0.6244 | 0.7902 |
| No log | 5.5600 | 278 | 0.6231 | 0.5692 | 0.6231 | 0.7893 |
| No log | 5.6 | 280 | 0.6163 | 0.5471 | 0.6163 | 0.7850 |
| No log | 5.64 | 282 | 0.6124 | 0.6407 | 0.6124 | 0.7825 |
| No log | 5.68 | 284 | 0.6062 | 0.6589 | 0.6062 | 0.7786 |
| No log | 5.72 | 286 | 0.6105 | 0.6256 | 0.6105 | 0.7814 |
| No log | 5.76 | 288 | 0.6282 | 0.5855 | 0.6282 | 0.7926 |
| No log | 5.8 | 290 | 0.6194 | 0.6177 | 0.6194 | 0.7871 |
| No log | 5.84 | 292 | 0.6046 | 0.6853 | 0.6046 | 0.7776 |
| No log | 5.88 | 294 | 0.6987 | 0.6316 | 0.6987 | 0.8359 |
| No log | 5.92 | 296 | 0.6982 | 0.6482 | 0.6982 | 0.8356 |
| No log | 5.96 | 298 | 0.6042 | 0.6561 | 0.6042 | 0.7773 |
| No log | 6.0 | 300 | 0.6794 | 0.6029 | 0.6794 | 0.8242 |
| No log | 6.04 | 302 | 0.6953 | 0.6019 | 0.6953 | 0.8339 |
| No log | 6.08 | 304 | 0.6139 | 0.5969 | 0.6139 | 0.7835 |
| No log | 6.12 | 306 | 0.6059 | 0.6134 | 0.6059 | 0.7784 |
| No log | 6.16 | 308 | 0.5995 | 0.6426 | 0.5995 | 0.7743 |
| No log | 6.2 | 310 | 0.5978 | 0.6561 | 0.5978 | 0.7732 |
| No log | 6.24 | 312 | 0.5985 | 0.6916 | 0.5985 | 0.7736 |
| No log | 6.28 | 314 | 0.5990 | 0.6748 | 0.5990 | 0.7740 |
| No log | 6.32 | 316 | 0.5977 | 0.6602 | 0.5977 | 0.7731 |
| No log | 6.36 | 318 | 0.5838 | 0.6602 | 0.5838 | 0.7641 |
| No log | 6.4 | 320 | 0.5967 | 0.6092 | 0.5967 | 0.7725 |
| No log | 6.44 | 322 | 0.5941 | 0.5986 | 0.5941 | 0.7708 |
| No log | 6.48 | 324 | 0.5854 | 0.6672 | 0.5854 | 0.7651 |
| No log | 6.52 | 326 | 0.5558 | 0.7171 | 0.5558 | 0.7455 |
| No log | 6.5600 | 328 | 0.5506 | 0.7380 | 0.5506 | 0.7420 |
| No log | 6.6 | 330 | 0.5898 | 0.7094 | 0.5898 | 0.7680 |
| No log | 6.64 | 332 | 0.6443 | 0.6310 | 0.6443 | 0.8027 |
| No log | 6.68 | 334 | 0.6407 | 0.6455 | 0.6407 | 0.8005 |
| No log | 6.72 | 336 | 0.6231 | 0.6092 | 0.6231 | 0.7894 |
| No log | 6.76 | 338 | 0.6463 | 0.5425 | 0.6463 | 0.8040 |
| No log | 6.8 | 340 | 0.6186 | 0.6014 | 0.6186 | 0.7865 |
| No log | 6.84 | 342 | 0.6005 | 0.6584 | 0.6005 | 0.7749 |
| No log | 6.88 | 344 | 0.6208 | 0.6380 | 0.6208 | 0.7879 |
| No log | 6.92 | 346 | 0.5921 | 0.6554 | 0.5921 | 0.7695 |
| No log | 6.96 | 348 | 0.5770 | 0.7404 | 0.5770 | 0.7596 |
| No log | 7.0 | 350 | 0.5714 | 0.6368 | 0.5714 | 0.7559 |
| No log | 7.04 | 352 | 0.5738 | 0.6392 | 0.5738 | 0.7575 |
| No log | 7.08 | 354 | 0.5789 | 0.6814 | 0.5789 | 0.7608 |
| No log | 7.12 | 356 | 0.6529 | 0.6230 | 0.6529 | 0.8081 |
| No log | 7.16 | 358 | 0.6664 | 0.5835 | 0.6664 | 0.8163 |
| No log | 7.2 | 360 | 0.6341 | 0.5964 | 0.6341 | 0.7963 |
| No log | 7.24 | 362 | 0.6274 | 0.5442 | 0.6274 | 0.7921 |
| No log | 7.28 | 364 | 0.6244 | 0.5536 | 0.6244 | 0.7902 |
| No log | 7.32 | 366 | 0.6116 | 0.5536 | 0.6116 | 0.7820 |
| No log | 7.36 | 368 | 0.5978 | 0.6380 | 0.5978 | 0.7732 |
| No log | 7.4 | 370 | 0.6112 | 0.6617 | 0.6112 | 0.7818 |
| No log | 7.44 | 372 | 0.5928 | 0.6894 | 0.5928 | 0.7699 |
| No log | 7.48 | 374 | 0.5820 | 0.6650 | 0.5820 | 0.7629 |
| No log | 7.52 | 376 | 0.6038 | 0.6468 | 0.6038 | 0.7770 |
| No log | 7.5600 | 378 | 0.6075 | 0.6544 | 0.6075 | 0.7794 |
| No log | 7.6 | 380 | 0.6092 | 0.5902 | 0.6092 | 0.7805 |
| No log | 7.64 | 382 | 0.6007 | 0.6545 | 0.6007 | 0.7751 |
| No log | 7.68 | 384 | 0.6030 | 0.6085 | 0.6030 | 0.7766 |
| No log | 7.72 | 386 | 0.6109 | 0.5774 | 0.6109 | 0.7816 |
| No log | 7.76 | 388 | 0.6559 | 0.6272 | 0.6559 | 0.8099 |
| No log | 7.8 | 390 | 0.7324 | 0.5766 | 0.7324 | 0.8558 |
| No log | 7.84 | 392 | 0.7048 | 0.5572 | 0.7048 | 0.8395 |
| No log | 7.88 | 394 | 0.6337 | 0.6291 | 0.6337 | 0.7960 |
| No log | 7.92 | 396 | 0.6072 | 0.6275 | 0.6072 | 0.7793 |
| No log | 7.96 | 398 | 0.5905 | 0.6284 | 0.5905 | 0.7684 |
| No log | 8.0 | 400 | 0.5877 | 0.6545 | 0.5877 | 0.7666 |
| No log | 8.04 | 402 | 0.6100 | 0.6335 | 0.6100 | 0.7810 |
| No log | 8.08 | 404 | 0.6369 | 0.6640 | 0.6369 | 0.7981 |
| No log | 8.12 | 406 | 0.6566 | 0.6151 | 0.6566 | 0.8103 |
| No log | 8.16 | 408 | 0.6073 | 0.6656 | 0.6073 | 0.7793 |
| No log | 8.2 | 410 | 0.6074 | 0.6297 | 0.6074 | 0.7794 |
| No log | 8.24 | 412 | 0.6067 | 0.5898 | 0.6067 | 0.7789 |
| No log | 8.28 | 414 | 0.6064 | 0.6032 | 0.6064 | 0.7787 |
| No log | 8.32 | 416 | 0.6013 | 0.6407 | 0.6013 | 0.7754 |
| No log | 8.36 | 418 | 0.6515 | 0.6240 | 0.6515 | 0.8072 |
| No log | 8.4 | 420 | 0.6775 | 0.5864 | 0.6775 | 0.8231 |
| No log | 8.44 | 422 | 0.6477 | 0.6282 | 0.6477 | 0.8048 |
| No log | 8.48 | 424 | 0.6258 | 0.6068 | 0.6258 | 0.7911 |
| No log | 8.52 | 426 | 0.6318 | 0.6068 | 0.6318 | 0.7948 |
| No log | 8.56 | 428 | 0.6401 | 0.6491 | 0.6401 | 0.8000 |
| No log | 8.6 | 430 | 0.6482 | 0.6392 | 0.6482 | 0.8051 |
| No log | 8.64 | 432 | 0.6698 | 0.6050 | 0.6698 | 0.8184 |
| No log | 8.68 | 434 | 0.7438 | 0.5830 | 0.7438 | 0.8625 |
| No log | 8.72 | 436 | 0.7641 | 0.5915 | 0.7641 | 0.8741 |
| No log | 8.76 | 438 | 0.6929 | 0.6167 | 0.6929 | 0.8324 |
| No log | 8.8 | 440 | 0.6535 | 0.5475 | 0.6535 | 0.8084 |
| No log | 8.84 | 442 | 0.6517 | 0.5415 | 0.6517 | 0.8073 |
| No log | 8.88 | 444 | 0.6293 | 0.6001 | 0.6293 | 0.7933 |
| No log | 8.92 | 446 | 0.6060 | 0.6500 | 0.6060 | 0.7785 |
| No log | 8.96 | 448 | 0.6192 | 0.6872 | 0.6192 | 0.7869 |
| No log | 9.0 | 450 | 0.6245 | 0.6732 | 0.6245 | 0.7902 |
| No log | 9.04 | 452 | 0.6063 | 0.6672 | 0.6063 | 0.7786 |
| No log | 9.08 | 454 | 0.5999 | 0.6383 | 0.5999 | 0.7745 |
| No log | 9.12 | 456 | 0.6055 | 0.6383 | 0.6055 | 0.7781 |
| No log | 9.16 | 458 | 0.6367 | 0.6269 | 0.6367 | 0.7979 |
| No log | 9.2 | 460 | 0.6675 | 0.6063 | 0.6675 | 0.8170 |
| No log | 9.24 | 462 | 0.6900 | 0.5206 | 0.6900 | 0.8306 |
| No log | 9.28 | 464 | 0.6893 | 0.4494 | 0.6893 | 0.8303 |
| No log | 9.32 | 466 | 0.7056 | 0.4975 | 0.7056 | 0.8400 |
| No log | 9.36 | 468 | 0.7250 | 0.4505 | 0.7250 | 0.8515 |
| No log | 9.4 | 470 | 0.7068 | 0.4210 | 0.7068 | 0.8407 |
| No log | 9.44 | 472 | 0.6791 | 0.4804 | 0.6791 | 0.8241 |
| No log | 9.48 | 474 | 0.6952 | 0.5425 | 0.6952 | 0.8338 |
| No log | 9.52 | 476 | 0.7844 | 0.5756 | 0.7844 | 0.8857 |
| No log | 9.56 | 478 | 0.7957 | 0.5210 | 0.7957 | 0.8920 |
| No log | 9.6 | 480 | 0.7617 | 0.5527 | 0.7617 | 0.8728 |
| No log | 9.64 | 482 | 0.6856 | 0.6356 | 0.6856 | 0.8280 |
| No log | 9.68 | 484 | 0.6689 | 0.6356 | 0.6689 | 0.8179 |
| No log | 9.72 | 486 | 0.6709 | 0.6028 | 0.6709 | 0.8191 |
| No log | 9.76 | 488 | 0.6941 | 0.6160 | 0.6941 | 0.8331 |
| No log | 9.8 | 490 | 0.6648 | 0.6228 | 0.6648 | 0.8154 |
| No log | 9.84 | 492 | 0.6353 | 0.6347 | 0.6353 | 0.7971 |
| No log | 9.88 | 494 | 0.6149 | 0.6085 | 0.6149 | 0.7841 |
| No log | 9.92 | 496 | 0.6021 | 0.6095 | 0.6021 | 0.7760 |
| No log | 9.96 | 498 | 0.6174 | 0.6284 | 0.6174 | 0.7858 |
| 0.2665 | 10.0 | 500 | 0.6192 | 0.6380 | 0.6192 | 0.7869 |
| 0.2665 | 10.04 | 502 | 0.6758 | 0.6097 | 0.6758 | 0.8221 |
| 0.2665 | 10.08 | 504 | 0.6792 | 0.6333 | 0.6792 | 0.8241 |
| 0.2665 | 10.12 | 506 | 0.5996 | 0.6536 | 0.5996 | 0.7743 |
| 0.2665 | 10.16 | 508 | 0.5715 | 0.6614 | 0.5715 | 0.7560 |
| 0.2665 | 10.2 | 510 | 0.5743 | 0.6659 | 0.5743 | 0.7578 |
| 0.2665 | 10.24 | 512 | 0.5758 | 0.6406 | 0.5758 | 0.7588 |
| 0.2665 | 10.28 | 514 | 0.6404 | 0.6564 | 0.6404 | 0.8002 |
| 0.2665 | 10.32 | 516 | 0.6959 | 0.6546 | 0.6959 | 0.8342 |
| 0.2665 | 10.36 | 518 | 0.7138 | 0.6446 | 0.7138 | 0.8448 |
| 0.2665 | 10.4 | 520 | 0.6062 | 0.6838 | 0.6062 | 0.7786 |
| 0.2665 | 10.44 | 522 | 0.5579 | 0.6553 | 0.5579 | 0.7469 |
| 0.2665 | 10.48 | 524 | 0.5630 | 0.6115 | 0.5630 | 0.7503 |
| 0.2665 | 10.52 | 526 | 0.5536 | 0.6499 | 0.5536 | 0.7441 |
| 0.2665 | 10.56 | 528 | 0.5529 | 0.6732 | 0.5529 | 0.7436 |
| 0.2665 | 10.6 | 530 | 0.5599 | 0.6396 | 0.5599 | 0.7483 |
| 0.2665 | 10.64 | 532 | 0.5574 | 0.6687 | 0.5574 | 0.7466 |
| 0.2665 | 10.68 | 534 | 0.5396 | 0.6545 | 0.5396 | 0.7346 |
| 0.2665 | 10.72 | 536 | 0.5409 | 0.6389 | 0.5409 | 0.7355 |
| 0.2665 | 10.76 | 538 | 0.5472 | 0.6704 | 0.5472 | 0.7397 |
| 0.2665 | 10.8 | 540 | 0.5730 | 0.6167 | 0.5730 | 0.7570 |
| 0.2665 | 10.84 | 542 | 0.5695 | 0.6199 | 0.5695 | 0.7547 |
| 0.2665 | 10.88 | 544 | 0.5710 | 0.6748 | 0.5710 | 0.7556 |
| 0.2665 | 10.92 | 546 | 0.5597 | 0.6973 | 0.5597 | 0.7481 |
| 0.2665 | 10.96 | 548 | 0.5377 | 0.6464 | 0.5377 | 0.7333 |
| 0.2665 | 11.0 | 550 | 0.5357 | 0.6634 | 0.5357 | 0.7319 |
| 0.2665 | 11.04 | 552 | 0.5255 | 0.6581 | 0.5255 | 0.7249 |
| 0.2665 | 11.08 | 554 | 0.5267 | 0.6589 | 0.5267 | 0.7258 |
| 0.2665 | 11.12 | 556 | 0.5313 | 0.6770 | 0.5313 | 0.7289 |
| 0.2665 | 11.16 | 558 | 0.5506 | 0.6655 | 0.5506 | 0.7420 |
| 0.2665 | 11.2 | 560 | 0.6005 | 0.6485 | 0.6005 | 0.7749 |
| 0.2665 | 11.24 | 562 | 0.6184 | 0.6485 | 0.6184 | 0.7864 |
| 0.2665 | 11.28 | 564 | 0.6421 | 0.6377 | 0.6421 | 0.8013 |
| 0.2665 | 11.32 | 566 | 0.6060 | 0.6485 | 0.6060 | 0.7784 |
| 0.2665 | 11.36 | 568 | 0.5441 | 0.6291 | 0.5441 | 0.7376 |
| 0.2665 | 11.4 | 570 | 0.5359 | 0.6880 | 0.5359 | 0.7321 |
| 0.2665 | 11.44 | 572 | 0.5445 | 0.6602 | 0.5445 | 0.7379 |
| 0.2665 | 11.48 | 574 | 0.5736 | 0.5678 | 0.5736 | 0.7573 |
| 0.2665 | 11.52 | 576 | 0.5952 | 0.5654 | 0.5952 | 0.7715 |
| 0.2665 | 11.56 | 578 | 0.5962 | 0.5654 | 0.5962 | 0.7721 |
| 0.2665 | 11.6 | 580 | 0.5821 | 0.5540 | 0.5821 | 0.7629 |
| 0.2665 | 11.64 | 582 | 0.5725 | 0.5752 | 0.5725 | 0.7566 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Aratako/DeepSeek-R1-Distill-Qwen-32B-Japanese-AWQ | Aratako | "2025-01-27T18:03:32Z" | 193 | 0 | null | [
"safetensors",
"qwen2",
"ja",
"base_model:cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese",
"base_model:quantized:cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese",
"license:mit",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-27T17:53:37Z" | ---
license: mit
language:
- ja
base_model:
- cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese
---
# DeepSeek-R1-Distill-Qwen-32B-Japanese-AWQ
This is an AWQ-quantized version of [cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese](https://huggingface.co/cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese), cyberagent's Japanese continued-training model built on [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) was used as the calibration data.
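A minimal loading sketch with vLLM, one common way to serve AWQ checkpoints (the sampling settings and prompt are illustrative; applying the model's chat template is recommended for best results):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Aratako/DeepSeek-R1-Distill-Qwen-32B-Japanese-AWQ", quantization="awq")
params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["日本の首都はどこですか?"], params)  # "What is the capital of Japan?"
print(outputs[0].outputs[0].text)
```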
For any other details, please refer to the original model. |
Zeedon/ij | Zeedon | "2025-03-01T09:55:13Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-01T09:25:35Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ij
---
# Ij
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ij` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Zeedon/ij', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
chrisob94/llama-3.1-8b-dissertation-doc-chat-experiment-10-fold | chrisob94 | "2024-08-03T15:52:08Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-03T15:45:08Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** chrisob94
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
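A minimal inference sketch with Unsloth's loader (sequence length, 4-bit loading, and the example prompt are illustrative assumptions):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chrisob94/llama-3.1-8b-dissertation-doc-chat-experiment-10-fold",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

messages = [{"role": "user", "content": "Summarise the main argument of chapter 2."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0]))
```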
|
sakethchalla/my_isl_model | sakethchalla | "2023-04-09T10:42:12Z" | 220 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-04-09T08:07:42Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: my_isl_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7283950617283951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_isl_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9092
- Accuracy: 0.7284
## Model description
More information needed
## Intended uses & limitations
More information needed
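A minimal inference sketch with the image-classification pipeline (the image path is a placeholder; the label names come from the image folder dataset used for fine-tuning):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sakethchalla/my_isl_model")
preds = classifier("path/to/sign_gesture.jpg")  # placeholder image path
print(preds[:3])  # top predicted sign labels with scores
```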
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.1292 | 0.96 | 11 | 3.0256 | 0.2593 |
| 2.9426 | 2.0 | 23 | 2.7796 | 0.2716 |
| 2.706 | 2.96 | 34 | 2.5462 | 0.4321 |
| 2.5389 | 4.0 | 46 | 2.4454 | 0.4568 |
| 2.3638 | 4.96 | 57 | 2.2169 | 0.6914 |
| 2.1862 | 6.0 | 69 | 2.1349 | 0.6296 |
| 2.0459 | 6.96 | 80 | 2.1135 | 0.6049 |
| 1.9912 | 8.0 | 92 | 1.9757 | 0.7531 |
| 1.9504 | 8.96 | 103 | 1.9073 | 0.7407 |
| 1.942 | 9.57 | 110 | 1.9092 | 0.7284 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
cackerman/ft_0to31_interleaved_both8pluscrazy41_selfrec16scaled0to30_mult0.1 | cackerman | "2025-03-15T06:56:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-15T06:52:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
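Pending an official snippet, a generic causal-LM sketch based on the `llama` / `text-generation` tags (precision and the example prompt are assumptions):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cackerman/ft_0to31_interleaved_both8pluscrazy41_selfrec16scaled0to30_mult0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```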
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_GermanCredit_cfda_16ep_66_newversion | MinaMila | "2025-03-19T05:12:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T05:10:01Z" | ---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
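For inference, a minimal sketch (assuming the uploaded weights load as a standard causal LM with a chat template; untested for this repo) might look like:

```python
# Hypothetical usage sketch; the prompt and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_GermanCredit_cfda_16ep_66_newversion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Describe the German Credit dataset in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```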
|
derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF | derekbsnider | "2025-01-16T04:33:42Z" | 9 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-16T04:33:29Z" | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-3B-Instruct
---
# derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo derekbsnider/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048
```
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t300_e20_member_shadow30 | FounderOfHuggingface | "2023-12-05T13:40:57Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-05T13:40:55Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
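In the meantime, a minimal sketch (assuming this is a LoRA adapter for the base `gpt2` model listed in the metadata; untested for this repo) might look like:

```python
# Hypothetical usage sketch; the adapter is loaded on top of the base gpt2 checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t300_e20_member_shadow30"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("The following article is about", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```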
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Mungert/QwQ-32B-GGUF | Mungert | "2025-04-09T22:43:49Z" | 475 | 3 | transformers | [
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2309.00071",
"arxiv:2412.15115",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-04-04T22:21:28Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B
tags:
- chat
library_name: transformers
---
# <span style="color: #7FFF7F;">QwQ-32B GGUF Models</span>
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
### **Method**
- **Dynamic Precision Allocation**:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- **Critical Component Protection**:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2bit
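As a rough illustration only (this is not the actual quantization tooling), the layer-wise precision assignment described above could be sketched as:

```python
# Illustrative sketch of the precision-allocation idea; type names follow the tables below.
def assign_quant_type(layer_idx: int, n_layers: int) -> str:
    frac = layer_idx / max(n_layers - 1, 1)
    if frac < 0.25 or frac > 0.75:   # first/last 25% of layers keep more precision
        return "IQ4_XS"
    return "IQ2_XXS"                 # middle 50% uses the lower-bit types

# Embeddings and output layers are held at higher precision (Q5_K) separately.
```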
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
### **When to Use These Models**
📌 **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and edge devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `QwQ-32B-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `QwQ-32B-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `QwQ-32B-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `QwQ-32B-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `QwQ-32B-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `QwQ-32B-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `QwQ-32B-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.
### `QwQ-32B-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `QwQ-32B-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `QwQ-32B-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `QwQ-32B-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://freenetworkmonitor.click/dashboard)
💬 **How to test**:
1. Click the **chat icon** (bottom right on any page)
2. Choose an **AI assistant type**:
- `TurboLLM` (GPT-4-mini)
- `FreeLLM` (Open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Metasploit integration**
🟡 **TestLLM** – Current experimental model (llama.cpp on 6 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4-mini** for:
- **Real-time network diagnostics**
- **Automated penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by [downloading our Free Network Monitor Agent](https://freenetworkmonitor.click/download)
🔵 **HugLLM** – Open-source models (≈8B params):
- **2x more tokens** than TurboLLM
- **AI-powered log analysis**
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example AI Commands to Test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a quick Nmap vulnerability test"`
# QwQ-32B
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
<p align="center">
<img width="100%" src="figures/benchmark.jpg">
</p>
**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines).
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
QwQ is based on Qwen2.5, whose code is included in the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
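For example, upgrading via pip avoids this:

```bash
pip install --upgrade "transformers>=4.37.0"
```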
## Quickstart
Here is a code snippet showing how to load the tokenizer and model, and how to generate content, using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Usage Guidelines
To achieve optimal performance, we recommend the following settings:
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
2. **Sampling Parameters**:
- Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions (a code sketch applying these settings follows these guidelines).
- Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in `apply_chat_template`.
4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g.,`\"answer\": \"C\"`." in the prompt.
5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
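Building on the Quickstart snippet above, a minimal sketch of the recommended sampling settings (guideline 2) might look like the following; the exact values within the suggested ranges are up to you:

```python
# Non-greedy sampling with the settings recommended in this card (TopK picked inside 20-40).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,
    min_p=0.0,
)
```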
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@article{qwen2.5,
title={Qwen2.5 Technical Report},
author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal={arXiv preprint arXiv:2412.15115},
year={2024}
}
``` |
AumBarai/Pyramids_Training-1 | AumBarai | "2024-02-28T13:39:39Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-02-28T13:39:27Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AumBarai/Pyramids_Training-1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
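If you prefer to run the agent locally instead, the trained model can usually be pulled from the Hub first (a sketch, assuming the Hugging Face ML-Agents integration is installed; flags may differ across versions):

```bash
# Download the trained agent from the Hub into a local folder.
mlagents-load-from-hf --repo-id="AumBarai/Pyramids_Training-1" --local-dir="./downloads"
```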
|
stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | "2023-10-26T11:07:12Z" | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-24T09:26:51Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT 64k as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
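For inference, a minimal Flair sketch (assuming the tagger loads directly from the Hub and uses the default `ner` tag type) might look like:

```python
# Hypothetical usage sketch; the example sentence is adapted from the widget text above.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

sentence = Sentence("Nous recevons le premier numéro d'un nouveau journal, le Radical-Libéral.")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):
    print(entity)
```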
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|------------------|--------------|--------------|--------------|--------------|-----------------|
| `bs8-e10-lr3e-05` | [0.8389][1] | [0.8466][2] | [0.8299][3] | [0.8391][4] | [0.8427][5] | 0.8394 ± 0.0062 |
| `bs4-e10-lr3e-05` | [0.8279][6] | [0.8364][7] | [0.8404][8] | [0.8382][9] | [0.8371][10] | 0.836 ± 0.0048 |
| `bs8-e10-lr5e-05` | [**0.8418**][11] | [0.8337][12] | [0.831][13] | [0.8346][14] | [0.8352][15] | 0.8353 ± 0.004 |
| `bs4-e10-lr5e-05` | [0.831][16] | [0.8239][17] | [0.7784][18] | [0.8313][19] | [0.8191][20] | 0.8167 ± 0.022 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
formu/DR-Site | formu | "2021-03-26T15:34:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-03-02T23:29:05Z" | https://www.geogebra.org/m/w8uzjttg
https://www.geogebra.org/m/gvn7m78g
https://www.geogebra.org/m/arxecanq
https://www.geogebra.org/m/xb69bvww
https://www.geogebra.org/m/apvepfnd
https://www.geogebra.org/m/evmj8ckk
https://www.geogebra.org/m/qxcxwmhp
https://www.geogebra.org/m/p3cxqh6c
https://www.geogebra.org/m/ggrahbgd
https://www.geogebra.org/m/pnhymrbc
https://www.geogebra.org/m/zjukbtk9
https://www.geogebra.org/m/bbezun8r
https://www.geogebra.org/m/sgwamtru
https://www.geogebra.org/m/fpunkxxp
https://www.geogebra.org/m/acxebrr7 |
nghiatrannnnnn/10006337-b46f-4957-9e3a-669a34a5da30 | nghiatrannnnnn | "2025-01-28T08:09:55Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:adapter:jingyeom/seal3.1.6n_7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T07:48:10Z" | ---
library_name: peft
base_model: jingyeom/seal3.1.6n_7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10006337-b46f-4957-9e3a-669a34a5da30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jingyeom/seal3.1.6n_7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdff50ee15fcf425_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdff50ee15fcf425_train_data.json
type:
field_input: ''
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/10006337-b46f-4957-9e3a-669a34a5da30
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fdff50ee15fcf425_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92f98e2d-1eec-4b9e-be85-14d955a8da37
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92f98e2d-1eec-4b9e-be85-14d955a8da37
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 10006337-b46f-4957-9e3a-669a34a5da30
This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7012
## Model description
More information needed
## Intended uses & limitations
More information needed
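Until more details are documented, a minimal loading sketch for the LoRA adapter (assuming the base model from the config above and a standard PEFT setup; untested for this repo) might look like:

```python
# Hypothetical usage sketch; not verified against this adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "jingyeom/seal3.1.6n_7b"
adapter_id = "nghiatrannnnnn/10006337-b46f-4957-9e3a-669a34a5da30"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```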
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7566 | 0.0946 | 200 | 0.7012 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dsfdsf2/distilbert-base-uncased-finetuned-squad | dsfdsf2 | "2024-04-26T04:54:13Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-04-25T12:48:32Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: dsfdsf2/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dsfdsf2/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9640
- Train End Logits Accuracy: 0.7317
- Train Start Logits Accuracy: 0.6920
- Validation Loss: 1.1190
- Validation End Logits Accuracy: 0.6979
- Validation Start Logits Accuracy: 0.6640
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
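The optimizer dictionary above corresponds roughly to the following Keras objects; this is a reconstruction sketch from the listed values, not the original training script.
```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0 over 11064 steps, as listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```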
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4963 | 0.6099 | 0.5713 | 1.1677 | 0.6843 | 0.6492 | 0 |
| 0.9640 | 0.7317 | 0.6920 | 1.1190 | 0.6979 | 0.6640 | 1 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.16.1
- Datasets 2.19.0
- Tokenizers 0.19.1
|
DavideTHU/corgy_shoes_LoRA | DavideTHU | "2024-01-03T06:03:59Z" | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-01-03T06:03:52Z" |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - DavideTHU/corgy_shoes_LoRA
<Gallery />
## Model description
These are DavideTHU/corgy_shoes_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was not enabled.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
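A minimal generation sketch with diffusers, assuming a CUDA device, fp16 weights, and the fp16-fixed VAE mentioned above; the inference-step count and output filename are arbitrary choices for the example.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the base SDXL pipeline with the fp16-fixed VAE used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("DavideTHU/corgy_shoes_LoRA")

image = pipe("a photo of TOK dog", num_inference_steps=25).images[0]
image.save("tok_dog.png")
```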
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/DavideTHU/corgy_shoes_LoRA/tree/main) them in the Files & versions tab.
|
jxm/cde-small-v2 | jxm | "2025-02-03T23:41:38Z" | 11,832 | 77 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"feature-extraction",
"mteb",
"transformers",
"modernbert",
"custom_code",
"arxiv:2410.02525",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-01-13T18:04:14Z" | ---
tags:
- mteb
- transformers
- sentence-transformers
- modernbert
base_model: answerdotai/ModernBERT-base
model-index:
- name: cde-small-v2
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 86.01490000000001
- type: f1
value: 80.938
- type: f1_weighted
value: 86.9232
- type: ap
value: 54.949099999999994
- type: ap_weighted
value: 54.949099999999994
- type: main_score
value: 86.01490000000001
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification (default)
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.0223
- type: f1
value: 96.0206
- type: f1_weighted
value: 96.0206
- type: ap
value: 93.8301
- type: ap_weighted
value: 93.8301
- type: main_score
value: 96.0223
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 55.096000000000004
- type: f1
value: 54.4353
- type: f1_weighted
value: 54.4353
- type: main_score
value: 55.096000000000004
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna (default)
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: ndcg_at_1
value: 54.125
- type: ndcg_at_3
value: 69.009
- type: ndcg_at_5
value: 72.722
- type: ndcg_at_10
value: 74.957
- type: ndcg_at_20
value: 75.801
- type: ndcg_at_100
value: 75.986
- type: ndcg_at_1000
value: 76.015
- type: map_at_1
value: 54.125
- type: map_at_3
value: 65.375
- type: map_at_5
value: 67.448
- type: map_at_10
value: 68.38499999999999
- type: map_at_20
value: 68.636
- type: map_at_100
value: 68.66600000000001
- type: map_at_1000
value: 68.66799999999999
- type: recall_at_1
value: 54.125
- type: recall_at_3
value: 79.51599999999999
- type: recall_at_5
value: 88.478
- type: recall_at_10
value: 95.306
- type: recall_at_20
value: 98.506
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: precision_at_1
value: 54.125
- type: precision_at_3
value: 26.505000000000003
- type: precision_at_5
value: 17.696
- type: precision_at_10
value: 9.531
- type: precision_at_20
value: 4.925
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 54.623
- type: mrr_at_3
value: 65.505
- type: mrr_at_5
value: 67.6174
- type: mrr_at_10
value: 68.5664
- type: mrr_at_20
value: 68.8173
- type: mrr_at_100
value: 68.8476
- type: mrr_at_1000
value: 68.8489
- type: nauc_ndcg_at_1_max
value: -14.4789
- type: nauc_ndcg_at_1_std
value: -25.5432
- type: nauc_ndcg_at_1_diff1
value: 23.7267
- type: nauc_ndcg_at_3_max
value: -8.1401
- type: nauc_ndcg_at_3_std
value: -22.9099
- type: nauc_ndcg_at_3_diff1
value: 21.069499999999998
- type: nauc_ndcg_at_5_max
value: -8.4301
- type: nauc_ndcg_at_5_std
value: -22.9185
- type: nauc_ndcg_at_5_diff1
value: 21.229100000000003
- type: nauc_ndcg_at_10_max
value: -8.6651
- type: nauc_ndcg_at_10_std
value: -23.5444
- type: nauc_ndcg_at_10_diff1
value: 21.9585
- type: nauc_ndcg_at_20_max
value: -9.285400000000001
- type: nauc_ndcg_at_20_std
value: -23.4297
- type: nauc_ndcg_at_20_diff1
value: 21.6731
- type: nauc_ndcg_at_100_max
value: -9.8693
- type: nauc_ndcg_at_100_std
value: -23.313
- type: nauc_ndcg_at_100_diff1
value: 21.5888
- type: nauc_ndcg_at_1000_max
value: -9.9675
- type: nauc_ndcg_at_1000_std
value: -23.3522
- type: nauc_ndcg_at_1000_diff1
value: 21.5714
- type: nauc_map_at_1_max
value: -14.4789
- type: nauc_map_at_1_std
value: -25.5432
- type: nauc_map_at_1_diff1
value: 23.7267
- type: nauc_map_at_3_max
value: -10.0484
- type: nauc_map_at_3_std
value: -23.3575
- type: nauc_map_at_3_diff1
value: 21.329
- type: nauc_map_at_5_max
value: -10.3514
- type: nauc_map_at_5_std
value: -23.3955
- type: nauc_map_at_5_diff1
value: 21.3531
- type: nauc_map_at_10_max
value: -10.484200000000001
- type: nauc_map_at_10_std
value: -23.6726
- type: nauc_map_at_10_diff1
value: 21.6458
- type: nauc_map_at_20_max
value: -10.638499999999999
- type: nauc_map_at_20_std
value: -23.6588
- type: nauc_map_at_20_diff1
value: 21.576600000000003
- type: nauc_map_at_100_max
value: -10.717400000000001
- type: nauc_map_at_100_std
value: -23.6559
- type: nauc_map_at_100_diff1
value: 21.5688
- type: nauc_map_at_1000_max
value: -10.7203
- type: nauc_map_at_1000_std
value: -23.6557
- type: nauc_map_at_1000_diff1
value: 21.5682
- type: nauc_recall_at_1_max
value: -14.4789
- type: nauc_recall_at_1_std
value: -25.5432
- type: nauc_recall_at_1_diff1
value: 23.7267
- type: nauc_recall_at_3_max
value: -0.2134
- type: nauc_recall_at_3_std
value: -21.251800000000003
- type: nauc_recall_at_3_diff1
value: 20.3069
- type: nauc_recall_at_5_max
value: 4.109100000000001
- type: nauc_recall_at_5_std
value: -20.1382
- type: nauc_recall_at_5_diff1
value: 21.1976
- type: nauc_recall_at_10_max
value: 18.3416
- type: nauc_recall_at_10_std
value: -22.9791
- type: nauc_recall_at_10_diff1
value: 29.4668
- type: nauc_recall_at_20_max
value: 45.3219
- type: nauc_recall_at_20_std
value: -14.8366
- type: nauc_recall_at_20_diff1
value: 31.829800000000002
- type: nauc_recall_at_100_max
value: 38.8075
- type: nauc_recall_at_100_std
value: 25.4176
- type: nauc_recall_at_100_diff1
value: 32.2733
- type: nauc_recall_at_1000_max
value: 28.1372
- type: nauc_recall_at_1000_std
value: 35.442
- type: nauc_recall_at_1000_diff1
value: 31.8247
- type: nauc_precision_at_1_max
value: -14.4789
- type: nauc_precision_at_1_std
value: -25.5432
- type: nauc_precision_at_1_diff1
value: 23.7267
- type: nauc_precision_at_3_max
value: -0.2134
- type: nauc_precision_at_3_std
value: -21.251800000000003
- type: nauc_precision_at_3_diff1
value: 20.3069
- type: nauc_precision_at_5_max
value: 4.109100000000001
- type: nauc_precision_at_5_std
value: -20.1382
- type: nauc_precision_at_5_diff1
value: 21.1976
- type: nauc_precision_at_10_max
value: 18.3416
- type: nauc_precision_at_10_std
value: -22.9791
- type: nauc_precision_at_10_diff1
value: 29.4668
- type: nauc_precision_at_20_max
value: 45.3219
- type: nauc_precision_at_20_std
value: -14.8366
- type: nauc_precision_at_20_diff1
value: 31.829800000000002
- type: nauc_precision_at_100_max
value: 38.8075
- type: nauc_precision_at_100_std
value: 25.4176
- type: nauc_precision_at_100_diff1
value: 32.2733
- type: nauc_precision_at_1000_max
value: 28.1372
- type: nauc_precision_at_1000_std
value: 35.442
- type: nauc_precision_at_1000_diff1
value: 31.8247
- type: nauc_mrr_at_1_max
value: -14.066600000000001
- type: nauc_mrr_at_1_std
value: -25.0145
- type: nauc_mrr_at_1_diff1
value: 22.361900000000002
- type: nauc_mrr_at_3_max
value: -10.6465
- type: nauc_mrr_at_3_std
value: -23.4323
- type: nauc_mrr_at_3_diff1
value: 19.758899999999997
- type: nauc_mrr_at_5_max
value: -10.7144
- type: nauc_mrr_at_5_std
value: -23.2823
- type: nauc_mrr_at_5_diff1
value: 19.8552
- type: nauc_mrr_at_10_max
value: -10.7815
- type: nauc_mrr_at_10_std
value: -23.51
- type: nauc_mrr_at_10_diff1
value: 20.157
- type: nauc_mrr_at_20_max
value: -10.9391
- type: nauc_mrr_at_20_std
value: -23.4946
- type: nauc_mrr_at_20_diff1
value: 20.072400000000002
- type: nauc_mrr_at_100_max
value: -11.018500000000001
- type: nauc_mrr_at_100_std
value: -23.491400000000002
- type: nauc_mrr_at_100_diff1
value: 20.0627
- type: nauc_mrr_at_1000_max
value: -11.0214
- type: nauc_mrr_at_1000_std
value: -23.491300000000003
- type: nauc_mrr_at_1000_diff1
value: 20.061999999999998
- type: main_score
value: 74.957
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 50.5269
- type: v_measure_std
value: 14.0094
- type: main_score
value: 50.5269
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S (default)
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 41.620200000000004
- type: v_measure_std
value: 14.4842
- type: main_score
value: 41.620200000000004
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions (default)
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 61.790299999999995
- type: mrr
value: 75.8156
- type: nAUC_map_max
value: 26.151200000000003
- type: nAUC_map_std
value: 15.8953
- type: nAUC_map_diff1
value: 5.0684
- type: nAUC_mrr_max
value: 36.9643
- type: nAUC_mrr_std
value: 19.0749
- type: nAUC_mrr_diff1
value: 15.549399999999999
- type: main_score
value: 61.790299999999995
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: pearson
value: 88.41590000000001
- type: spearman
value: 86.7116
- type: cosine_pearson
value: 88.41590000000001
- type: cosine_spearman
value: 86.7116
- type: manhattan_pearson
value: 86.2045
- type: manhattan_spearman
value: 85.7248
- type: euclidean_pearson
value: 86.2336
- type: euclidean_spearman
value: 85.861
- type: main_score
value: 86.7116
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification (default)
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 88.3052
- type: f1
value: 88.2617
- type: f1_weighted
value: 88.2617
- type: main_score
value: 88.3052
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P (default)
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 45.4377
- type: v_measure_std
value: 0.8543000000000001
- type: main_score
value: 45.4377
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S (default)
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 39.6472
- type: v_measure_std
value: 0.7081999999999999
- type: main_score
value: 39.6472
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval (default)
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: ndcg_at_1
value: 39.342
- type: ndcg_at_3
value: 44.718999999999994
- type: ndcg_at_5
value: 47.449999999999996
- type: ndcg_at_10
value: 50.17
- type: ndcg_at_20
value: 52.366
- type: ndcg_at_100
value: 55.400000000000006
- type: ndcg_at_1000
value: 57.13399999999999
- type: map_at_1
value: 32.300000000000004
- type: map_at_3
value: 39.937
- type: map_at_5
value: 42.141
- type: map_at_10
value: 43.681
- type: map_at_20
value: 44.516
- type: map_at_100
value: 45.14
- type: map_at_1000
value: 45.25
- type: recall_at_1
value: 32.300000000000004
- type: recall_at_3
value: 47.12
- type: recall_at_5
value: 54.581
- type: recall_at_10
value: 62.873000000000005
- type: recall_at_20
value: 70.604
- type: recall_at_100
value: 84.465
- type: recall_at_1000
value: 95.299
- type: precision_at_1
value: 39.342
- type: precision_at_3
value: 21.459
- type: precision_at_5
value: 15.622
- type: precision_at_10
value: 9.514
- type: precision_at_20
value: 5.665
- type: precision_at_100
value: 1.5150000000000001
- type: precision_at_1000
value: 0.19499999999999998
- type: mrr_at_1
value: 39.3419
- type: mrr_at_3
value: 46.805
- type: mrr_at_5
value: 48.5861
- type: mrr_at_10
value: 49.6697
- type: mrr_at_20
value: 50.131
- type: mrr_at_100
value: 50.373599999999996
- type: mrr_at_1000
value: 50.4106
- type: nauc_ndcg_at_1_max
value: 40.0004
- type: nauc_ndcg_at_1_std
value: -1.8753
- type: nauc_ndcg_at_1_diff1
value: 45.9146
- type: nauc_ndcg_at_3_max
value: 41.3777
- type: nauc_ndcg_at_3_std
value: -1.2817
- type: nauc_ndcg_at_3_diff1
value: 42.710100000000004
- type: nauc_ndcg_at_5_max
value: 42.4211
- type: nauc_ndcg_at_5_std
value: -0.6910999999999999
- type: nauc_ndcg_at_5_diff1
value: 42.9048
- type: nauc_ndcg_at_10_max
value: 42.609399999999994
- type: nauc_ndcg_at_10_std
value: 0.4398
- type: nauc_ndcg_at_10_diff1
value: 42.4967
- type: nauc_ndcg_at_20_max
value: 42.7921
- type: nauc_ndcg_at_20_std
value: 0.9266
- type: nauc_ndcg_at_20_diff1
value: 42.701899999999995
- type: nauc_ndcg_at_100_max
value: 43.4878
- type: nauc_ndcg_at_100_std
value: 2.2893
- type: nauc_ndcg_at_100_diff1
value: 42.735
- type: nauc_ndcg_at_1000_max
value: 43.3776
- type: nauc_ndcg_at_1000_std
value: 2.1375
- type: nauc_ndcg_at_1000_diff1
value: 42.6437
- type: nauc_map_at_1_max
value: 37.573499999999996
- type: nauc_map_at_1_std
value: -1.4611
- type: nauc_map_at_1_diff1
value: 50.0479
- type: nauc_map_at_3_max
value: 40.5952
- type: nauc_map_at_3_std
value: -1.7034
- type: nauc_map_at_3_diff1
value: 45.7247
- type: nauc_map_at_5_max
value: 41.3854
- type: nauc_map_at_5_std
value: -1.5435
- type: nauc_map_at_5_diff1
value: 45.278400000000005
- type: nauc_map_at_10_max
value: 41.7269
- type: nauc_map_at_10_std
value: -1.0763
- type: nauc_map_at_10_diff1
value: 45.0862
- type: nauc_map_at_20_max
value: 42.0241
- type: nauc_map_at_20_std
value: -0.8463999999999999
- type: nauc_map_at_20_diff1
value: 45.1365
- type: nauc_map_at_100_max
value: 42.248200000000004
- type: nauc_map_at_100_std
value: -0.6139
- type: nauc_map_at_100_diff1
value: 45.0658
- type: nauc_map_at_1000_max
value: 42.2442
- type: nauc_map_at_1000_std
value: -0.6187
- type: nauc_map_at_1000_diff1
value: 45.0382
- type: nauc_recall_at_1_max
value: 37.573499999999996
- type: nauc_recall_at_1_std
value: -1.4611
- type: nauc_recall_at_1_diff1
value: 50.0479
- type: nauc_recall_at_3_max
value: 39.9536
- type: nauc_recall_at_3_std
value: -0.132
- type: nauc_recall_at_3_diff1
value: 39.6892
- type: nauc_recall_at_5_max
value: 41.428799999999995
- type: nauc_recall_at_5_std
value: 1.2703
- type: nauc_recall_at_5_diff1
value: 38.2213
- type: nauc_recall_at_10_max
value: 41.3254
- type: nauc_recall_at_10_std
value: 4.9163
- type: nauc_recall_at_10_diff1
value: 35.1215
- type: nauc_recall_at_20_max
value: 41.3807
- type: nauc_recall_at_20_std
value: 7.3897
- type: nauc_recall_at_20_diff1
value: 33.7864
- type: nauc_recall_at_100_max
value: 49.6612
- type: nauc_recall_at_100_std
value: 25.1511
- type: nauc_recall_at_100_diff1
value: 33.968199999999996
- type: nauc_recall_at_1000_max
value: 71.2452
- type: nauc_recall_at_1000_std
value: 68.7065
- type: nauc_recall_at_1000_diff1
value: 33.0124
- type: nauc_precision_at_1_max
value: 40.0004
- type: nauc_precision_at_1_std
value: -1.8753
- type: nauc_precision_at_1_diff1
value: 45.9146
- type: nauc_precision_at_3_max
value: 36.741800000000005
- type: nauc_precision_at_3_std
value: -1.2777
- type: nauc_precision_at_3_diff1
value: 23.3539
- type: nauc_precision_at_5_max
value: 32.9756
- type: nauc_precision_at_5_std
value: -0.1613
- type: nauc_precision_at_5_diff1
value: 15.866
- type: nauc_precision_at_10_max
value: 25.7284
- type: nauc_precision_at_10_std
value: 2.7586
- type: nauc_precision_at_10_diff1
value: 6.579899999999999
- type: nauc_precision_at_20_max
value: 18.8213
- type: nauc_precision_at_20_std
value: 3.6470000000000002
- type: nauc_precision_at_20_diff1
value: -0.45690000000000003
- type: nauc_precision_at_100_max
value: 5.7518
- type: nauc_precision_at_100_std
value: 3.4711
- type: nauc_precision_at_100_diff1
value: -12.380700000000001
- type: nauc_precision_at_1000_max
value: -8.6862
- type: nauc_precision_at_1000_std
value: -4.5796
- type: nauc_precision_at_1000_diff1
value: -19.9355
- type: nauc_mrr_at_1_max
value: 40.0004
- type: nauc_mrr_at_1_std
value: -1.8753
- type: nauc_mrr_at_1_diff1
value: 45.9146
- type: nauc_mrr_at_3_max
value: 40.686
- type: nauc_mrr_at_3_std
value: -0.8626999999999999
- type: nauc_mrr_at_3_diff1
value: 41.4552
- type: nauc_mrr_at_5_max
value: 41.2445
- type: nauc_mrr_at_5_std
value: -0.7058
- type: nauc_mrr_at_5_diff1
value: 41.7244
- type: nauc_mrr_at_10_max
value: 41.1575
- type: nauc_mrr_at_10_std
value: -0.44489999999999996
- type: nauc_mrr_at_10_diff1
value: 41.355199999999996
- type: nauc_mrr_at_20_max
value: 41.1548
- type: nauc_mrr_at_20_std
value: -0.33
- type: nauc_mrr_at_20_diff1
value: 41.444199999999995
- type: nauc_mrr_at_100_max
value: 41.1908
- type: nauc_mrr_at_100_std
value: -0.3263
- type: nauc_mrr_at_100_diff1
value: 41.505900000000004
- type: nauc_mrr_at_1000_max
value: 41.1935
- type: nauc_mrr_at_1000_std
value: -0.3216
- type: nauc_mrr_at_1000_diff1
value: 41.5128
- type: main_score
value: 50.17
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval (default)
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: ndcg_at_1
value: 42.102000000000004
- type: ndcg_at_3
value: 45.741
- type: ndcg_at_5
value: 47.734
- type: ndcg_at_10
value: 49.732
- type: ndcg_at_20
value: 51.295
- type: ndcg_at_100
value: 53.935
- type: ndcg_at_1000
value: 55.765
- type: map_at_1
value: 33.306999999999995
- type: map_at_3
value: 40.953
- type: map_at_5
value: 42.731
- type: map_at_10
value: 44.022
- type: map_at_20
value: 44.693
- type: map_at_100
value: 45.259
- type: map_at_1000
value: 45.383
- type: recall_at_1
value: 33.306999999999995
- type: recall_at_3
value: 47.127
- type: recall_at_5
value: 52.89
- type: recall_at_10
value: 59.16400000000001
- type: recall_at_20
value: 64.85
- type: recall_at_100
value: 77.206
- type: recall_at_1000
value: 88.701
- type: precision_at_1
value: 42.102000000000004
- type: precision_at_3
value: 21.975
- type: precision_at_5
value: 15.465000000000002
- type: precision_at_10
value: 9.229
- type: precision_at_20
value: 5.404
- type: precision_at_100
value: 1.461
- type: precision_at_1000
value: 0.192
- type: mrr_at_1
value: 42.1019
- type: mrr_at_3
value: 48.322700000000005
- type: mrr_at_5
value: 49.593399999999995
- type: mrr_at_10
value: 50.364399999999996
- type: mrr_at_20
value: 50.7215
- type: mrr_at_100
value: 50.962300000000006
- type: mrr_at_1000
value: 50.9999
- type: nauc_ndcg_at_1_max
value: 40.6054
- type: nauc_ndcg_at_1_std
value: -3.4602
- type: nauc_ndcg_at_1_diff1
value: 54.0346
- type: nauc_ndcg_at_3_max
value: 40.0946
- type: nauc_ndcg_at_3_std
value: -3.7981000000000003
- type: nauc_ndcg_at_3_diff1
value: 49.2481
- type: nauc_ndcg_at_5_max
value: 40.198699999999995
- type: nauc_ndcg_at_5_std
value: -3.2983
- type: nauc_ndcg_at_5_diff1
value: 48.7252
- type: nauc_ndcg_at_10_max
value: 40.6072
- type: nauc_ndcg_at_10_std
value: -3.472
- type: nauc_ndcg_at_10_diff1
value: 48.7302
- type: nauc_ndcg_at_20_max
value: 41.0897
- type: nauc_ndcg_at_20_std
value: -2.8645
- type: nauc_ndcg_at_20_diff1
value: 48.8834
- type: nauc_ndcg_at_100_max
value: 41.450900000000004
- type: nauc_ndcg_at_100_std
value: -1.3305
- type: nauc_ndcg_at_100_diff1
value: 48.2699
- type: nauc_ndcg_at_1000_max
value: 41.4853
- type: nauc_ndcg_at_1000_std
value: -0.7634
- type: nauc_ndcg_at_1000_diff1
value: 48.28
- type: nauc_map_at_1_max
value: 31.776100000000003
- type: nauc_map_at_1_std
value: -12.5085
- type: nauc_map_at_1_diff1
value: 56.84630000000001
- type: nauc_map_at_3_max
value: 36.3131
- type: nauc_map_at_3_std
value: -9.3976
- type: nauc_map_at_3_diff1
value: 52.4471
- type: nauc_map_at_5_max
value: 37.330799999999996
- type: nauc_map_at_5_std
value: -8.0619
- type: nauc_map_at_5_diff1
value: 51.692800000000005
- type: nauc_map_at_10_max
value: 38.406400000000005
- type: nauc_map_at_10_std
value: -7.1754
- type: nauc_map_at_10_diff1
value: 51.46849999999999
- type: nauc_map_at_20_max
value: 38.940000000000005
- type: nauc_map_at_20_std
value: -6.4747
- type: nauc_map_at_20_diff1
value: 51.34570000000001
- type: nauc_map_at_100_max
value: 39.3424
- type: nauc_map_at_100_std
value: -5.7301
- type: nauc_map_at_100_diff1
value: 51.0633
- type: nauc_map_at_1000_max
value: 39.3905
- type: nauc_map_at_1000_std
value: -5.5938
- type: nauc_map_at_1000_diff1
value: 51.04109999999999
- type: nauc_recall_at_1_max
value: 31.776100000000003
- type: nauc_recall_at_1_std
value: -12.5085
- type: nauc_recall_at_1_diff1
value: 56.84630000000001
- type: nauc_recall_at_3_max
value: 35.702
- type: nauc_recall_at_3_std
value: -7.3138
- type: nauc_recall_at_3_diff1
value: 46.3454
- type: nauc_recall_at_5_max
value: 36.459399999999995
- type: nauc_recall_at_5_std
value: -4.678100000000001
- type: nauc_recall_at_5_diff1
value: 43.6423
- type: nauc_recall_at_10_max
value: 37.3534
- type: nauc_recall_at_10_std
value: -4.0492
- type: nauc_recall_at_10_diff1
value: 41.7513
- type: nauc_recall_at_20_max
value: 39.379999999999995
- type: nauc_recall_at_20_std
value: -1.0078
- type: nauc_recall_at_20_diff1
value: 41.638
- type: nauc_recall_at_100_max
value: 40.705799999999996
- type: nauc_recall_at_100_std
value: 8.9477
- type: nauc_recall_at_100_diff1
value: 35.7987
- type: nauc_recall_at_1000_max
value: 41.560399999999994
- type: nauc_recall_at_1000_std
value: 19.6108
- type: nauc_recall_at_1000_diff1
value: 30.694399999999998
- type: nauc_precision_at_1_max
value: 40.6054
- type: nauc_precision_at_1_std
value: -3.4602
- type: nauc_precision_at_1_diff1
value: 54.0346
- type: nauc_precision_at_3_max
value: 42.0217
- type: nauc_precision_at_3_std
value: 10.3896
- type: nauc_precision_at_3_diff1
value: 26.7498
- type: nauc_precision_at_5_max
value: 40.4414
- type: nauc_precision_at_5_std
value: 18.177599999999998
- type: nauc_precision_at_5_diff1
value: 16.9455
- type: nauc_precision_at_10_max
value: 38.921
- type: nauc_precision_at_10_std
value: 24.1093
- type: nauc_precision_at_10_diff1
value: 8.4258
- type: nauc_precision_at_20_max
value: 34.620200000000004
- type: nauc_precision_at_20_std
value: 29.351399999999998
- type: nauc_precision_at_20_diff1
value: 0.15360000000000001
- type: nauc_precision_at_100_max
value: 25.230000000000004
- type: nauc_precision_at_100_std
value: 36.8424
- type: nauc_precision_at_100_diff1
value: -12.225900000000001
- type: nauc_precision_at_1000_max
value: 13.1715
- type: nauc_precision_at_1000_std
value: 34.7096
- type: nauc_precision_at_1000_diff1
value: -16.5331
- type: nauc_mrr_at_1_max
value: 40.6054
- type: nauc_mrr_at_1_std
value: -3.4602
- type: nauc_mrr_at_1_diff1
value: 54.0346
- type: nauc_mrr_at_3_max
value: 42.2127
- type: nauc_mrr_at_3_std
value: -1.0392000000000001
- type: nauc_mrr_at_3_diff1
value: 49.748
- type: nauc_mrr_at_5_max
value: 42.2638
- type: nauc_mrr_at_5_std
value: -0.40049999999999997
- type: nauc_mrr_at_5_diff1
value: 49.3009
- type: nauc_mrr_at_10_max
value: 42.0477
- type: nauc_mrr_at_10_std
value: -0.6505000000000001
- type: nauc_mrr_at_10_diff1
value: 49.0978
- type: nauc_mrr_at_20_max
value: 42.0895
- type: nauc_mrr_at_20_std
value: -0.5649000000000001
- type: nauc_mrr_at_20_diff1
value: 49.1893
- type: nauc_mrr_at_100_max
value: 42.0951
- type: nauc_mrr_at_100_std
value: -0.5555
- type: nauc_mrr_at_100_diff1
value: 49.2047
- type: nauc_mrr_at_1000_max
value: 42.0946
- type: nauc_mrr_at_1000_std
value: -0.5584
- type: nauc_mrr_at_1000_diff1
value: 49.207699999999996
- type: main_score
value: 49.732
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval (default)
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: ndcg_at_1
value: 48.276
- type: ndcg_at_3
value: 53.727000000000004
- type: ndcg_at_5
value: 56.511
- type: ndcg_at_10
value: 59.023
- type: ndcg_at_20
value: 60.802
- type: ndcg_at_100
value: 62.980999999999995
- type: ndcg_at_1000
value: 64.13600000000001
- type: map_at_1
value: 42.347
- type: map_at_3
value: 50.349999999999994
- type: map_at_5
value: 52.276999999999994
- type: map_at_10
value: 53.6
- type: map_at_20
value: 54.217000000000006
- type: map_at_100
value: 54.605000000000004
- type: map_at_1000
value: 54.663
- type: recall_at_1
value: 42.347
- type: recall_at_3
value: 57.499
- type: recall_at_5
value: 64.269
- type: recall_at_10
value: 71.568
- type: recall_at_20
value: 78.125
- type: recall_at_100
value: 88.699
- type: recall_at_1000
value: 96.887
- type: precision_at_1
value: 48.276
- type: precision_at_3
value: 23.49
- type: precision_at_5
value: 16.262999999999998
- type: precision_at_10
value: 9.322999999999999
- type: precision_at_20
value: 5.21
- type: precision_at_100
value: 1.22
- type: precision_at_1000
value: 0.136
- type: mrr_at_1
value: 48.2759
- type: mrr_at_3
value: 54.5246
- type: mrr_at_5
value: 56.0982
- type: mrr_at_10
value: 56.961
- type: mrr_at_20
value: 57.391400000000004
- type: mrr_at_100
value: 57.6295
- type: mrr_at_1000
value: 57.66139999999999
- type: nauc_ndcg_at_1_max
value: 43.5037
- type: nauc_ndcg_at_1_std
value: -7.6921
- type: nauc_ndcg_at_1_diff1
value: 58.544700000000006
- type: nauc_ndcg_at_3_max
value: 44.630900000000004
- type: nauc_ndcg_at_3_std
value: -6.260300000000001
- type: nauc_ndcg_at_3_diff1
value: 56.120999999999995
- type: nauc_ndcg_at_5_max
value: 45.1267
- type: nauc_ndcg_at_5_std
value: -5.5512
- type: nauc_ndcg_at_5_diff1
value: 54.8272
- type: nauc_ndcg_at_10_max
value: 45.691199999999995
- type: nauc_ndcg_at_10_std
value: -4.1767
- type: nauc_ndcg_at_10_diff1
value: 53.8565
- type: nauc_ndcg_at_20_max
value: 46.0581
- type: nauc_ndcg_at_20_std
value: -2.4019
- type: nauc_ndcg_at_20_diff1
value: 53.67150000000001
- type: nauc_ndcg_at_100_max
value: 46.3071
- type: nauc_ndcg_at_100_std
value: -1.856
- type: nauc_ndcg_at_100_diff1
value: 54.2616
- type: nauc_ndcg_at_1000_max
value: 46.3054
- type: nauc_ndcg_at_1000_std
value: -2.4795000000000003
- type: nauc_ndcg_at_1000_diff1
value: 54.6332
- type: nauc_map_at_1_max
value: 37.3915
- type: nauc_map_at_1_std
value: -9.6709
- type: nauc_map_at_1_diff1
value: 59.0807
- type: nauc_map_at_3_max
value: 42.3532
- type: nauc_map_at_3_std
value: -8.4634
- type: nauc_map_at_3_diff1
value: 57.342400000000005
- type: nauc_map_at_5_max
value: 43.065799999999996
- type: nauc_map_at_5_std
value: -7.430000000000001
- type: nauc_map_at_5_diff1
value: 56.5453
- type: nauc_map_at_10_max
value: 43.4845
- type: nauc_map_at_10_std
value: -6.5406
- type: nauc_map_at_10_diff1
value: 55.959199999999996
- type: nauc_map_at_20_max
value: 43.8265
- type: nauc_map_at_20_std
value: -5.8393
- type: nauc_map_at_20_diff1
value: 55.8438
- type: nauc_map_at_100_max
value: 44.014399999999995
- type: nauc_map_at_100_std
value: -5.6227
- type: nauc_map_at_100_diff1
value: 55.8762
- type: nauc_map_at_1000_max
value: 44.0386
- type: nauc_map_at_1000_std
value: -5.6262
- type: nauc_map_at_1000_diff1
value: 55.888099999999994
- type: nauc_recall_at_1_max
value: 37.3915
- type: nauc_recall_at_1_std
value: -9.6709
- type: nauc_recall_at_1_diff1
value: 59.0807
- type: nauc_recall_at_3_max
value: 43.8264
- type: nauc_recall_at_3_std
value: -6.309099999999999
- type: nauc_recall_at_3_diff1
value: 53.4872
- type: nauc_recall_at_5_max
value: 44.237300000000005
- type: nauc_recall_at_5_std
value: -4.1856
- type: nauc_recall_at_5_diff1
value: 49.3654
- type: nauc_recall_at_10_max
value: 46.7914
- type: nauc_recall_at_10_std
value: 1.3229
- type: nauc_recall_at_10_diff1
value: 45.1973
- type: nauc_recall_at_20_max
value: 49.560500000000005
- type: nauc_recall_at_20_std
value: 11.9406
- type: nauc_recall_at_20_diff1
value: 42.821999999999996
- type: nauc_recall_at_100_max
value: 53.3482
- type: nauc_recall_at_100_std
value: 27.375
- type: nauc_recall_at_100_diff1
value: 44.0535
- type: nauc_recall_at_1000_max
value: 64.18
- type: nauc_recall_at_1000_std
value: 53.603699999999996
- type: nauc_recall_at_1000_diff1
value: 50.1113
- type: nauc_precision_at_1_max
value: 43.5037
- type: nauc_precision_at_1_std
value: -7.6921
- type: nauc_precision_at_1_diff1
value: 58.544700000000006
- type: nauc_precision_at_3_max
value: 41.9145
- type: nauc_precision_at_3_std
value: 0.6891999999999999
- type: nauc_precision_at_3_diff1
value: 35.0689
- type: nauc_precision_at_5_max
value: 38.553399999999996
- type: nauc_precision_at_5_std
value: 6.1493
- type: nauc_precision_at_5_diff1
value: 23.127
- type: nauc_precision_at_10_max
value: 34.076699999999995
- type: nauc_precision_at_10_std
value: 12.673300000000001
- type: nauc_precision_at_10_diff1
value: 10.7967
- type: nauc_precision_at_20_max
value: 31.9315
- type: nauc_precision_at_20_std
value: 21.0503
- type: nauc_precision_at_20_diff1
value: 1.9767
- type: nauc_precision_at_100_max
value: 24.287300000000002
- type: nauc_precision_at_100_std
value: 24.5746
- type: nauc_precision_at_100_diff1
value: -9.751700000000001
- type: nauc_precision_at_1000_max
value: 19.252
- type: nauc_precision_at_1000_std
value: 21.0394
- type: nauc_precision_at_1000_diff1
value: -16.8851
- type: nauc_mrr_at_1_max
value: 43.5037
- type: nauc_mrr_at_1_std
value: -7.6921
- type: nauc_mrr_at_1_diff1
value: 58.544700000000006
- type: nauc_mrr_at_3_max
value: 45.9732
- type: nauc_mrr_at_3_std
value: -5.3982
- type: nauc_mrr_at_3_diff1
value: 56.1002
- type: nauc_mrr_at_5_max
value: 45.9223
- type: nauc_mrr_at_5_std
value: -5.3386000000000005
- type: nauc_mrr_at_5_diff1
value: 55.196
- type: nauc_mrr_at_10_max
value: 46.1619
- type: nauc_mrr_at_10_std
value: -4.965
- type: nauc_mrr_at_10_diff1
value: 55.081199999999995
- type: nauc_mrr_at_20_max
value: 46.238600000000005
- type: nauc_mrr_at_20_std
value: -4.5938
- type: nauc_mrr_at_20_diff1
value: 55.0906
- type: nauc_mrr_at_100_max
value: 46.2087
- type: nauc_mrr_at_100_std
value: -4.6099
- type: nauc_mrr_at_100_diff1
value: 55.1922
- type: nauc_mrr_at_1000_max
value: 46.2022
- type: nauc_mrr_at_1000_std
value: -4.6231
- type: nauc_mrr_at_1000_diff1
value: 55.209399999999995
- type: main_score
value: 59.023
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval (default)
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: ndcg_at_1
value: 27.797
- type: ndcg_at_3
value: 34.787
- type: ndcg_at_5
value: 37.326
- type: ndcg_at_10
value: 39.583
- type: ndcg_at_20
value: 41.677
- type: ndcg_at_100
value: 44.932
- type: ndcg_at_1000
value: 46.893
- type: map_at_1
value: 26.209
- type: map_at_3
value: 32.365
- type: map_at_5
value: 33.819
- type: map_at_10
value: 34.827999999999996
- type: map_at_20
value: 35.447
- type: map_at_100
value: 35.93
- type: map_at_1000
value: 36.007
- type: recall_at_1
value: 26.209
- type: recall_at_3
value: 39.562999999999995
- type: recall_at_5
value: 45.594
- type: recall_at_10
value: 52.236000000000004
- type: recall_at_20
value: 60.019
- type: recall_at_100
value: 76.6
- type: recall_at_1000
value: 91.389
- type: precision_at_1
value: 27.797
- type: precision_at_3
value: 14.539
- type: precision_at_5
value: 10.215
- type: precision_at_10
value: 5.944
- type: precision_at_20
value: 3.469
- type: precision_at_100
value: 0.907
- type: precision_at_1000
value: 0.11100000000000002
- type: mrr_at_1
value: 27.796599999999998
- type: mrr_at_3
value: 34.2373
- type: mrr_at_5
value: 35.762699999999995
- type: mrr_at_10
value: 36.6849
- type: mrr_at_20
value: 37.257600000000004
- type: mrr_at_100
value: 37.6676
- type: mrr_at_1000
value: 37.723800000000004
- type: nauc_ndcg_at_1_max
value: 27.845599999999997
- type: nauc_ndcg_at_1_std
value: -8.0177
- type: nauc_ndcg_at_1_diff1
value: 44.9034
- type: nauc_ndcg_at_3_max
value: 28.7984
- type: nauc_ndcg_at_3_std
value: -6.7625
- type: nauc_ndcg_at_3_diff1
value: 38.344
- type: nauc_ndcg_at_5_max
value: 29.8333
- type: nauc_ndcg_at_5_std
value: -5.305
- type: nauc_ndcg_at_5_diff1
value: 37.8077
- type: nauc_ndcg_at_10_max
value: 30.0319
- type: nauc_ndcg_at_10_std
value: -3.7874
- type: nauc_ndcg_at_10_diff1
value: 36.7867
- type: nauc_ndcg_at_20_max
value: 29.768499999999996
- type: nauc_ndcg_at_20_std
value: -4.4994
- type: nauc_ndcg_at_20_diff1
value: 36.2424
- type: nauc_ndcg_at_100_max
value: 29.6882
- type: nauc_ndcg_at_100_std
value: -3.0686999999999998
- type: nauc_ndcg_at_100_diff1
value: 35.5097
- type: nauc_ndcg_at_1000_max
value: 30.0696
- type: nauc_ndcg_at_1000_std
value: -3.0852
- type: nauc_ndcg_at_1000_diff1
value: 36.168
- type: nauc_map_at_1_max
value: 26.105800000000002
- type: nauc_map_at_1_std
value: -9.0379
- type: nauc_map_at_1_diff1
value: 46.5148
- type: nauc_map_at_3_max
value: 27.851100000000002
- type: nauc_map_at_3_std
value: -7.6508
- type: nauc_map_at_3_diff1
value: 40.441
- type: nauc_map_at_5_max
value: 28.498600000000003
- type: nauc_map_at_5_std
value: -6.8919
- type: nauc_map_at_5_diff1
value: 40.2012
- type: nauc_map_at_10_max
value: 28.754
- type: nauc_map_at_10_std
value: -6.1987
- type: nauc_map_at_10_diff1
value: 39.7856
- type: nauc_map_at_20_max
value: 28.7468
- type: nauc_map_at_20_std
value: -6.372999999999999
- type: nauc_map_at_20_diff1
value: 39.7445
- type: nauc_map_at_100_max
value: 28.762999999999998
- type: nauc_map_at_100_std
value: -6.1504
- type: nauc_map_at_100_diff1
value: 39.643699999999995
- type: nauc_map_at_1000_max
value: 28.7886
- type: nauc_map_at_1000_std
value: -6.1426
- type: nauc_map_at_1000_diff1
value: 39.6637
- type: nauc_recall_at_1_max
value: 26.105800000000002
- type: nauc_recall_at_1_std
value: -9.0379
- type: nauc_recall_at_1_diff1
value: 46.5148
- type: nauc_recall_at_3_max
value: 28.845399999999998
- type: nauc_recall_at_3_std
value: -4.6356
- type: nauc_recall_at_3_diff1
value: 32.9931
- type: nauc_recall_at_5_max
value: 31.3996
- type: nauc_recall_at_5_std
value: -1.7656
- type: nauc_recall_at_5_diff1
value: 31.254199999999997
- type: nauc_recall_at_10_max
value: 31.406
- type: nauc_recall_at_10_std
value: 2.6767
- type: nauc_recall_at_10_diff1
value: 27.5627
- type: nauc_recall_at_20_max
value: 29.6752
- type: nauc_recall_at_20_std
value: 0.0991
- type: nauc_recall_at_20_diff1
value: 24.0771
- type: nauc_recall_at_100_max
value: 28.4217
- type: nauc_recall_at_100_std
value: 12.0071
- type: nauc_recall_at_100_diff1
value: 13.231100000000001
- type: nauc_recall_at_1000_max
value: 35.8245
- type: nauc_recall_at_1000_std
value: 30.705
- type: nauc_recall_at_1000_diff1
value: 2.7809
- type: nauc_precision_at_1_max
value: 27.845599999999997
- type: nauc_precision_at_1_std
value: -8.0177
- type: nauc_precision_at_1_diff1
value: 44.9034
- type: nauc_precision_at_3_max
value: 32.706
- type: nauc_precision_at_3_std
value: -3.9037
- type: nauc_precision_at_3_diff1
value: 29.921599999999998
- type: nauc_precision_at_5_max
value: 34.192
- type: nauc_precision_at_5_std
value: -0.5177
- type: nauc_precision_at_5_diff1
value: 28.4206
- type: nauc_precision_at_10_max
value: 33.6132
- type: nauc_precision_at_10_std
value: 4.372
- type: nauc_precision_at_10_diff1
value: 23.5257
- type: nauc_precision_at_20_max
value: 31.1237
- type: nauc_precision_at_20_std
value: 1.9191
- type: nauc_precision_at_20_diff1
value: 18.445700000000002
- type: nauc_precision_at_100_max
value: 22.5504
- type: nauc_precision_at_100_std
value: 11.1776
- type: nauc_precision_at_100_diff1
value: 3.3670999999999998
- type: nauc_precision_at_1000_max
value: 13.5905
- type: nauc_precision_at_1000_std
value: 12.9311
- type: nauc_precision_at_1000_diff1
value: -8.054699999999999
- type: nauc_mrr_at_1_max
value: 27.845599999999997
- type: nauc_mrr_at_1_std
value: -8.0177
- type: nauc_mrr_at_1_diff1
value: 44.9034
- type: nauc_mrr_at_3_max
value: 29.1589
- type: nauc_mrr_at_3_std
value: -6.4891000000000005
- type: nauc_mrr_at_3_diff1
value: 39.088699999999996
- type: nauc_mrr_at_5_max
value: 29.9228
- type: nauc_mrr_at_5_std
value: -5.6324
- type: nauc_mrr_at_5_diff1
value: 38.862
- type: nauc_mrr_at_10_max
value: 29.907600000000002
- type: nauc_mrr_at_10_std
value: -5.148
- type: nauc_mrr_at_10_diff1
value: 38.4778
- type: nauc_mrr_at_20_max
value: 29.8398
- type: nauc_mrr_at_20_std
value: -5.3067
- type: nauc_mrr_at_20_diff1
value: 38.275999999999996
- type: nauc_mrr_at_100_max
value: 29.828100000000003
- type: nauc_mrr_at_100_std
value: -5.1385
- type: nauc_mrr_at_100_diff1
value: 38.2314
- type: nauc_mrr_at_1000_max
value: 29.8443
- type: nauc_mrr_at_1000_std
value: -5.146
- type: nauc_mrr_at_1000_diff1
value: 38.2581
- type: main_score
value: 39.583
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval (default)
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: ndcg_at_1
value: 22.015
- type: ndcg_at_3
value: 25.941
- type: ndcg_at_5
value: 28.469
- type: ndcg_at_10
value: 31.391000000000002
- type: ndcg_at_20
value: 33.485
- type: ndcg_at_100
value: 37.145
- type: ndcg_at_1000
value: 39.909
- type: map_at_1
value: 17.580000000000002
- type: map_at_3
value: 22.900000000000002
- type: map_at_5
value: 24.498
- type: map_at_10
value: 25.823
- type: map_at_20
value: 26.429000000000002
- type: map_at_100
value: 27.029999999999998
- type: map_at_1000
value: 27.147
- type: recall_at_1
value: 17.580000000000002
- type: recall_at_3
value: 29.355999999999998
- type: recall_at_5
value: 35.634
- type: recall_at_10
value: 44.336
- type: recall_at_20
value: 51.661
- type: recall_at_100
value: 68.766
- type: recall_at_1000
value: 88.429
- type: precision_at_1
value: 22.015
- type: precision_at_3
value: 12.520999999999999
- type: precision_at_5
value: 9.254
- type: precision_at_10
value: 5.784000000000001
- type: precision_at_20
value: 3.514
- type: precision_at_100
value: 1.019
- type: precision_at_1000
value: 0.13899999999999998
- type: mrr_at_1
value: 22.0149
- type: mrr_at_3
value: 27.5705
- type: mrr_at_5
value: 29.168699999999998
- type: mrr_at_10
value: 30.352
- type: mrr_at_20
value: 30.968200000000003
- type: mrr_at_100
value: 31.3807
- type: mrr_at_1000
value: 31.4469
- type: nauc_ndcg_at_1_max
value: 21.2985
- type: nauc_ndcg_at_1_std
value: -4.6632
- type: nauc_ndcg_at_1_diff1
value: 36.1703
- type: nauc_ndcg_at_3_max
value: 23.2761
- type: nauc_ndcg_at_3_std
value: -2.9883
- type: nauc_ndcg_at_3_diff1
value: 31.11
- type: nauc_ndcg_at_5_max
value: 22.697400000000002
- type: nauc_ndcg_at_5_std
value: -2.6858
- type: nauc_ndcg_at_5_diff1
value: 29.1155
- type: nauc_ndcg_at_10_max
value: 21.745
- type: nauc_ndcg_at_10_std
value: -2.1321
- type: nauc_ndcg_at_10_diff1
value: 27.6691
- type: nauc_ndcg_at_20_max
value: 22.368
- type: nauc_ndcg_at_20_std
value: -1.1924000000000001
- type: nauc_ndcg_at_20_diff1
value: 27.453100000000003
- type: nauc_ndcg_at_100_max
value: 23.1279
- type: nauc_ndcg_at_100_std
value: 0.1931
- type: nauc_ndcg_at_100_diff1
value: 27.2613
- type: nauc_ndcg_at_1000_max
value: 23.5609
- type: nauc_ndcg_at_1000_std
value: 0.4277
- type: nauc_ndcg_at_1000_diff1
value: 27.898
- type: nauc_map_at_1_max
value: 22.1777
- type: nauc_map_at_1_std
value: -3.6511
- type: nauc_map_at_1_diff1
value: 35.193799999999996
- type: nauc_map_at_3_max
value: 22.6711
- type: nauc_map_at_3_std
value: -3.2921
- type: nauc_map_at_3_diff1
value: 31.647199999999998
- type: nauc_map_at_5_max
value: 22.3125
- type: nauc_map_at_5_std
value: -3.3684
- type: nauc_map_at_5_diff1
value: 30.6346
- type: nauc_map_at_10_max
value: 22.1293
- type: nauc_map_at_10_std
value: -3.0963000000000003
- type: nauc_map_at_10_diff1
value: 29.9676
- type: nauc_map_at_20_max
value: 22.345599999999997
- type: nauc_map_at_20_std
value: -2.7918
- type: nauc_map_at_20_diff1
value: 29.873300000000004
- type: nauc_map_at_100_max
value: 22.547600000000003
- type: nauc_map_at_100_std
value: -2.5456
- type: nauc_map_at_100_diff1
value: 29.8869
- type: nauc_map_at_1000_max
value: 22.5777
- type: nauc_map_at_1000_std
value: -2.5162
- type: nauc_map_at_1000_diff1
value: 29.9082
- type: nauc_recall_at_1_max
value: 22.1777
- type: nauc_recall_at_1_std
value: -3.6511
- type: nauc_recall_at_1_diff1
value: 35.193799999999996
- type: nauc_recall_at_3_max
value: 22.8589
- type: nauc_recall_at_3_std
value: -1.541
- type: nauc_recall_at_3_diff1
value: 26.8307
- type: nauc_recall_at_5_max
value: 21.2508
- type: nauc_recall_at_5_std
value: -1.6594000000000002
- type: nauc_recall_at_5_diff1
value: 23.0152
- type: nauc_recall_at_10_max
value: 18.4227
- type: nauc_recall_at_10_std
value: -0.29610000000000003
- type: nauc_recall_at_10_diff1
value: 19.0389
- type: nauc_recall_at_20_max
value: 20.0064
- type: nauc_recall_at_20_std
value: 2.6574
- type: nauc_recall_at_20_diff1
value: 18.1572
- type: nauc_recall_at_100_max
value: 22.8024
- type: nauc_recall_at_100_std
value: 11.629100000000001
- type: nauc_recall_at_100_diff1
value: 13.7353
- type: nauc_recall_at_1000_max
value: 33.8158
- type: nauc_recall_at_1000_std
value: 28.807
- type: nauc_recall_at_1000_diff1
value: 10.385900000000001
- type: nauc_precision_at_1_max
value: 21.2985
- type: nauc_precision_at_1_std
value: -4.6632
- type: nauc_precision_at_1_diff1
value: 36.1703
- type: nauc_precision_at_3_max
value: 23.8607
- type: nauc_precision_at_3_std
value: -1.2343
- type: nauc_precision_at_3_diff1
value: 26.056600000000003
- type: nauc_precision_at_5_max
value: 22.3303
- type: nauc_precision_at_5_std
value: -0.6769
- type: nauc_precision_at_5_diff1
value: 21.1393
- type: nauc_precision_at_10_max
value: 18.9603
- type: nauc_precision_at_10_std
value: 0.9261
- type: nauc_precision_at_10_diff1
value: 15.4373
- type: nauc_precision_at_20_max
value: 18.1666
- type: nauc_precision_at_20_std
value: 3.9616
- type: nauc_precision_at_20_diff1
value: 11.2774
- type: nauc_precision_at_100_max
value: 13.095399999999998
- type: nauc_precision_at_100_std
value: 7.7341999999999995
- type: nauc_precision_at_100_diff1
value: 3.3591999999999995
- type: nauc_precision_at_1000_max
value: 3.0223
- type: nauc_precision_at_1000_std
value: 4.3308
- type: nauc_precision_at_1000_diff1
value: -1.0134
- type: nauc_mrr_at_1_max
value: 21.2985
- type: nauc_mrr_at_1_std
value: -4.6632
- type: nauc_mrr_at_1_diff1
value: 36.1703
- type: nauc_mrr_at_3_max
value: 23.1376
- type: nauc_mrr_at_3_std
value: -3.228
- type: nauc_mrr_at_3_diff1
value: 33.150800000000004
- type: nauc_mrr_at_5_max
value: 22.7773
- type: nauc_mrr_at_5_std
value: -2.9971
- type: nauc_mrr_at_5_diff1
value: 31.8828
- type: nauc_mrr_at_10_max
value: 22.15
- type: nauc_mrr_at_10_std
value: -2.8863
- type: nauc_mrr_at_10_diff1
value: 31.465799999999998
- type: nauc_mrr_at_20_max
value: 22.3119
- type: nauc_mrr_at_20_std
value: -2.6858
- type: nauc_mrr_at_20_diff1
value: 31.446600000000004
- type: nauc_mrr_at_100_max
value: 22.3597
- type: nauc_mrr_at_100_std
value: -2.6425
- type: nauc_mrr_at_100_diff1
value: 31.4728
- type: nauc_mrr_at_1000_max
value: 22.3731
- type: nauc_mrr_at_1000_std
value: -2.6344
- type: nauc_mrr_at_1000_diff1
value: 31.489299999999997
- type: main_score
value: 31.391000000000002
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval (default)
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: ndcg_at_1
value: 38.690999999999995
- type: ndcg_at_3
value: 43.519000000000005
- type: ndcg_at_5
value: 45.862
- type: ndcg_at_10
value: 48.542
- type: ndcg_at_20
value: 50.40599999999999
- type: ndcg_at_100
value: 53.766000000000005
- type: ndcg_at_1000
value: 55.657000000000004
- type: map_at_1
value: 31.696
- type: map_at_3
value: 39.228
- type: map_at_5
value: 41.046
- type: map_at_10
value: 42.539
- type: map_at_20
value: 43.199
- type: map_at_100
value: 43.799
- type: map_at_1000
value: 43.902
- type: recall_at_1
value: 31.696
- type: recall_at_3
value: 46.482
- type: recall_at_5
value: 52.800999999999995
- type: recall_at_10
value: 60.650999999999996
- type: recall_at_20
value: 67.007
- type: recall_at_100
value: 82.669
- type: recall_at_1000
value: 95.02199999999999
- type: precision_at_1
value: 38.690999999999995
- type: precision_at_3
value: 20.404
- type: precision_at_5
value: 14.321
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_20
value: 5.01
- type: precision_at_100
value: 1.315
- type: precision_at_1000
value: 0.165
- type: mrr_at_1
value: 38.690999999999995
- type: mrr_at_3
value: 45.684999999999995
- type: mrr_at_5
value: 47.1575
- type: mrr_at_10
value: 48.1562
- type: mrr_at_20
value: 48.582
- type: mrr_at_100
value: 48.9294
- type: mrr_at_1000
value: 48.968
- type: nauc_ndcg_at_1_max
value: 38.6678
- type: nauc_ndcg_at_1_std
value: -0.7451
- type: nauc_ndcg_at_1_diff1
value: 54.51089999999999
- type: nauc_ndcg_at_3_max
value: 38.5936
- type: nauc_ndcg_at_3_std
value: -1.185
- type: nauc_ndcg_at_3_diff1
value: 50.5312
- type: nauc_ndcg_at_5_max
value: 38.0602
- type: nauc_ndcg_at_5_std
value: -1.8034999999999999
- type: nauc_ndcg_at_5_diff1
value: 49.2837
- type: nauc_ndcg_at_10_max
value: 38.342
- type: nauc_ndcg_at_10_std
value: -0.9533
- type: nauc_ndcg_at_10_diff1
value: 49.0239
- type: nauc_ndcg_at_20_max
value: 39.2226
- type: nauc_ndcg_at_20_std
value: 0.6093999999999999
- type: nauc_ndcg_at_20_diff1
value: 48.7193
- type: nauc_ndcg_at_100_max
value: 39.3235
- type: nauc_ndcg_at_100_std
value: 2.3982
- type: nauc_ndcg_at_100_diff1
value: 48.5831
- type: nauc_ndcg_at_1000_max
value: 39.8333
- type: nauc_ndcg_at_1000_std
value: 2.4336
- type: nauc_ndcg_at_1000_diff1
value: 48.802099999999996
- type: nauc_map_at_1_max
value: 33.9405
- type: nauc_map_at_1_std
value: -3.9303999999999997
- type: nauc_map_at_1_diff1
value: 55.7491
- type: nauc_map_at_3_max
value: 36.550399999999996
- type: nauc_map_at_3_std
value: -2.7818
- type: nauc_map_at_3_diff1
value: 51.7018
- type: nauc_map_at_5_max
value: 36.999500000000005
- type: nauc_map_at_5_std
value: -2.7546999999999997
- type: nauc_map_at_5_diff1
value: 51.011300000000006
- type: nauc_map_at_10_max
value: 37.4157
- type: nauc_map_at_10_std
value: -1.9426999999999999
- type: nauc_map_at_10_diff1
value: 50.8876
- type: nauc_map_at_20_max
value: 37.729
- type: nauc_map_at_20_std
value: -1.3641999999999999
- type: nauc_map_at_20_diff1
value: 50.6926
- type: nauc_map_at_100_max
value: 37.7894
- type: nauc_map_at_100_std
value: -1.0082
- type: nauc_map_at_100_diff1
value: 50.6244
- type: nauc_map_at_1000_max
value: 37.8313
- type: nauc_map_at_1000_std
value: -0.9648
- type: nauc_map_at_1000_diff1
value: 50.6292
- type: nauc_recall_at_1_max
value: 33.9405
- type: nauc_recall_at_1_std
value: -3.9303999999999997
- type: nauc_recall_at_1_diff1
value: 55.7491
- type: nauc_recall_at_3_max
value: 35.6518
- type: nauc_recall_at_3_std
value: -3.166
- type: nauc_recall_at_3_diff1
value: 47.0684
- type: nauc_recall_at_5_max
value: 34.9043
- type: nauc_recall_at_5_std
value: -3.3676
- type: nauc_recall_at_5_diff1
value: 43.152499999999996
- type: nauc_recall_at_10_max
value: 35.2134
- type: nauc_recall_at_10_std
value: -1.0841
- type: nauc_recall_at_10_diff1
value: 41.1852
- type: nauc_recall_at_20_max
value: 37.417699999999996
- type: nauc_recall_at_20_std
value: 4.1923
- type: nauc_recall_at_20_diff1
value: 39.1819
- type: nauc_recall_at_100_max
value: 36.471900000000005
- type: nauc_recall_at_100_std
value: 19.8322
- type: nauc_recall_at_100_diff1
value: 34.0503
- type: nauc_recall_at_1000_max
value: 51.3256
- type: nauc_recall_at_1000_std
value: 46.2018
- type: nauc_recall_at_1000_diff1
value: 25.4702
- type: nauc_precision_at_1_max
value: 38.6678
- type: nauc_precision_at_1_std
value: -0.7451
- type: nauc_precision_at_1_diff1
value: 54.51089999999999
- type: nauc_precision_at_3_max
value: 39.763
- type: nauc_precision_at_3_std
value: 5.3316
- type: nauc_precision_at_3_diff1
value: 34.5965
- type: nauc_precision_at_5_max
value: 35.8709
- type: nauc_precision_at_5_std
value: 5.8021
- type: nauc_precision_at_5_diff1
value: 25.3427
- type: nauc_precision_at_10_max
value: 30.9008
- type: nauc_precision_at_10_std
value: 11.5405
- type: nauc_precision_at_10_diff1
value: 15.775
- type: nauc_precision_at_20_max
value: 28.403200000000002
- type: nauc_precision_at_20_std
value: 18.1899
- type: nauc_precision_at_20_diff1
value: 6.8557999999999995
- type: nauc_precision_at_100_max
value: 15.776499999999999
- type: nauc_precision_at_100_std
value: 21.5746
- type: nauc_precision_at_100_diff1
value: -7.0051000000000005
- type: nauc_precision_at_1000_max
value: 6.2587
- type: nauc_precision_at_1000_std
value: 18.0076
- type: nauc_precision_at_1000_diff1
value: -17.366400000000002
- type: nauc_mrr_at_1_max
value: 38.6678
- type: nauc_mrr_at_1_std
value: -0.7451
- type: nauc_mrr_at_1_diff1
value: 54.51089999999999
- type: nauc_mrr_at_3_max
value: 40.489399999999996
- type: nauc_mrr_at_3_std
value: -0.3225
- type: nauc_mrr_at_3_diff1
value: 51.41480000000001
- type: nauc_mrr_at_5_max
value: 40.1627
- type: nauc_mrr_at_5_std
value: -0.16219999999999998
- type: nauc_mrr_at_5_diff1
value: 50.560300000000005
- type: nauc_mrr_at_10_max
value: 40.125899999999994
- type: nauc_mrr_at_10_std
value: 0.0545
- type: nauc_mrr_at_10_diff1
value: 50.3771
- type: nauc_mrr_at_20_max
value: 40.2183
- type: nauc_mrr_at_20_std
value: 0.2818
- type: nauc_mrr_at_20_diff1
value: 50.387
- type: nauc_mrr_at_100_max
value: 40.201100000000004
- type: nauc_mrr_at_100_std
value: 0.43350000000000005
- type: nauc_mrr_at_100_diff1
value: 50.395100000000006
- type: nauc_mrr_at_1000_max
value: 40.2026
- type: nauc_mrr_at_1000_std
value: 0.42129999999999995
- type: nauc_mrr_at_1000_diff1
value: 50.405199999999994
- type: main_score
value: 48.542
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval (default)
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: ndcg_at_1
value: 33.333
- type: ndcg_at_3
value: 39.431
- type: ndcg_at_5
value: 42.120000000000005
- type: ndcg_at_10
value: 44.968
- type: ndcg_at_20
value: 47.099000000000004
- type: ndcg_at_100
value: 50.288
- type: ndcg_at_1000
value: 52.371
- type: map_at_1
value: 27.087
- type: map_at_3
value: 35.203
- type: map_at_5
value: 37.230999999999995
- type: map_at_10
value: 38.693
- type: map_at_20
value: 39.425
- type: map_at_100
value: 40.001
- type: map_at_1000
value: 40.119
- type: recall_at_1
value: 27.087
- type: recall_at_3
value: 42.846000000000004
- type: recall_at_5
value: 49.846000000000004
- type: recall_at_10
value: 58.083
- type: recall_at_20
value: 65.615
- type: recall_at_100
value: 80.831
- type: recall_at_1000
value: 94.474
- type: precision_at_1
value: 33.333
- type: precision_at_3
value: 19.139999999999997
- type: precision_at_5
value: 13.858
- type: precision_at_10
value: 8.413
- type: precision_at_20
value: 4.926
- type: precision_at_100
value: 1.275
- type: precision_at_1000
value: 0.165
- type: mrr_at_1
value: 33.3333
- type: mrr_at_3
value: 41.0959
- type: mrr_at_5
value: 42.6826
- type: mrr_at_10
value: 43.819900000000004
- type: mrr_at_20
value: 44.3087
- type: mrr_at_100
value: 44.6693
- type: mrr_at_1000
value: 44.7164
- type: nauc_ndcg_at_1_max
value: 36.037
- type: nauc_ndcg_at_1_std
value: -0.2425
- type: nauc_ndcg_at_1_diff1
value: 46.9443
- type: nauc_ndcg_at_3_max
value: 33.5311
- type: nauc_ndcg_at_3_std
value: 1.2205000000000001
- type: nauc_ndcg_at_3_diff1
value: 38.8166
- type: nauc_ndcg_at_5_max
value: 34.3091
- type: nauc_ndcg_at_5_std
value: 2.8846
- type: nauc_ndcg_at_5_diff1
value: 38.222899999999996
- type: nauc_ndcg_at_10_max
value: 34.443400000000004
- type: nauc_ndcg_at_10_std
value: 3.5393
- type: nauc_ndcg_at_10_diff1
value: 37.9537
- type: nauc_ndcg_at_20_max
value: 34.929500000000004
- type: nauc_ndcg_at_20_std
value: 4.4444
- type: nauc_ndcg_at_20_diff1
value: 37.811099999999996
- type: nauc_ndcg_at_100_max
value: 35.6285
- type: nauc_ndcg_at_100_std
value: 6.356199999999999
- type: nauc_ndcg_at_100_diff1
value: 37.4749
- type: nauc_ndcg_at_1000_max
value: 35.8451
- type: nauc_ndcg_at_1000_std
value: 6.1044
- type: nauc_ndcg_at_1000_diff1
value: 38.5065
- type: nauc_map_at_1_max
value: 30.017100000000003
- type: nauc_map_at_1_std
value: -5.056299999999999
- type: nauc_map_at_1_diff1
value: 46.4338
- type: nauc_map_at_3_max
value: 31.936999999999998
- type: nauc_map_at_3_std
value: -1.0591
- type: nauc_map_at_3_diff1
value: 39.8778
- type: nauc_map_at_5_max
value: 32.859100000000005
- type: nauc_map_at_5_std
value: 0.42050000000000004
- type: nauc_map_at_5_diff1
value: 39.7368
- type: nauc_map_at_10_max
value: 33.042899999999996
- type: nauc_map_at_10_std
value: 0.8545
- type: nauc_map_at_10_diff1
value: 39.5713
- type: nauc_map_at_20_max
value: 33.3227
- type: nauc_map_at_20_std
value: 1.3109000000000002
- type: nauc_map_at_20_diff1
value: 39.5833
- type: nauc_map_at_100_max
value: 33.537
- type: nauc_map_at_100_std
value: 1.7505
- type: nauc_map_at_100_diff1
value: 39.6109
- type: nauc_map_at_1000_max
value: 33.578
- type: nauc_map_at_1000_std
value: 1.7679
- type: nauc_map_at_1000_diff1
value: 39.677299999999995
- type: nauc_recall_at_1_max
value: 30.017100000000003
- type: nauc_recall_at_1_std
value: -5.056299999999999
- type: nauc_recall_at_1_diff1
value: 46.4338
- type: nauc_recall_at_3_max
value: 31.3062
- type: nauc_recall_at_3_std
value: 1.6736
- type: nauc_recall_at_3_diff1
value: 32.743
- type: nauc_recall_at_5_max
value: 32.7338
- type: nauc_recall_at_5_std
value: 5.9388000000000005
- type: nauc_recall_at_5_diff1
value: 30.8784
- type: nauc_recall_at_10_max
value: 32.9312
- type: nauc_recall_at_10_std
value: 8.1993
- type: nauc_recall_at_10_diff1
value: 29.4248
- type: nauc_recall_at_20_max
value: 33.9206
- type: nauc_recall_at_20_std
value: 10.673
- type: nauc_recall_at_20_diff1
value: 27.377200000000002
- type: nauc_recall_at_100_max
value: 37.119
- type: nauc_recall_at_100_std
value: 24.6249
- type: nauc_recall_at_100_diff1
value: 19.403699999999997
- type: nauc_recall_at_1000_max
value: 52.2307
- type: nauc_recall_at_1000_std
value: 53.405199999999994
- type: nauc_recall_at_1000_diff1
value: 24.122799999999998
- type: nauc_precision_at_1_max
value: 36.037
- type: nauc_precision_at_1_std
value: -0.2425
- type: nauc_precision_at_1_diff1
value: 46.9443
- type: nauc_precision_at_3_max
value: 34.110600000000005
- type: nauc_precision_at_3_std
value: 8.7398
- type: nauc_precision_at_3_diff1
value: 27.441
- type: nauc_precision_at_5_max
value: 33.0042
- type: nauc_precision_at_5_std
value: 13.7932
- type: nauc_precision_at_5_diff1
value: 23.011300000000002
- type: nauc_precision_at_10_max
value: 28.8408
- type: nauc_precision_at_10_std
value: 14.4897
- type: nauc_precision_at_10_diff1
value: 18.0244
- type: nauc_precision_at_20_max
value: 25.5054
- type: nauc_precision_at_20_std
value: 16.5918
- type: nauc_precision_at_20_diff1
value: 14.665500000000002
- type: nauc_precision_at_100_max
value: 18.084400000000002
- type: nauc_precision_at_100_std
value: 20.7595
- type: nauc_precision_at_100_diff1
value: 6.2877
- type: nauc_precision_at_1000_max
value: 6.778099999999999
- type: nauc_precision_at_1000_std
value: 9.0734
- type: nauc_precision_at_1000_diff1
value: 5.6030999999999995
- type: nauc_mrr_at_1_max
value: 36.037
- type: nauc_mrr_at_1_std
value: -0.2425
- type: nauc_mrr_at_1_diff1
value: 46.9443
- type: nauc_mrr_at_3_max
value: 36.0423
- type: nauc_mrr_at_3_std
value: 3.0699
- type: nauc_mrr_at_3_diff1
value: 40.6527
- type: nauc_mrr_at_5_max
value: 36.3279
- type: nauc_mrr_at_5_std
value: 4.0948
- type: nauc_mrr_at_5_diff1
value: 40.1667
- type: nauc_mrr_at_10_max
value: 36.3884
- type: nauc_mrr_at_10_std
value: 4.5214
- type: nauc_mrr_at_10_diff1
value: 40.3499
- type: nauc_mrr_at_20_max
value: 36.3977
- type: nauc_mrr_at_20_std
value: 4.4357
- type: nauc_mrr_at_20_diff1
value: 40.342800000000004
- type: nauc_mrr_at_100_max
value: 36.422900000000006
- type: nauc_mrr_at_100_std
value: 4.501200000000001
- type: nauc_mrr_at_100_diff1
value: 40.3487
- type: nauc_mrr_at_1000_max
value: 36.4317
- type: nauc_mrr_at_1000_std
value: 4.4942
- type: nauc_mrr_at_1000_diff1
value: 40.3843
- type: main_score
value: 44.968
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval (default)
revision: CQADupstackRetrieval_is_a_combined_dataset
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 42.51058333333334
- type: ndcg_at_10
value: 42.51058333333334
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval (default)
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: ndcg_at_1
value: 28.066999999999997
- type: ndcg_at_3
value: 33.326
- type: ndcg_at_5
value: 35.432
- type: ndcg_at_10
value: 37.711
- type: ndcg_at_20
value: 39.377
- type: ndcg_at_100
value: 42.437999999999995
- type: ndcg_at_1000
value: 44.653999999999996
- type: map_at_1
value: 24.91
- type: map_at_3
value: 30.641000000000002
- type: map_at_5
value: 32.003
- type: map_at_10
value: 33.027
- type: map_at_20
value: 33.52
- type: map_at_100
value: 33.958
- type: map_at_1000
value: 34.048
- type: recall_at_1
value: 24.91
- type: recall_at_3
value: 36.931000000000004
- type: recall_at_5
value: 42.257
- type: recall_at_10
value: 49.248
- type: recall_at_20
value: 55.504
- type: recall_at_100
value: 71.086
- type: recall_at_1000
value: 87.209
- type: precision_at_1
value: 28.066999999999997
- type: precision_at_3
value: 14.571000000000002
- type: precision_at_5
value: 10.152999999999999
- type: precision_at_10
value: 5.982
- type: precision_at_20
value: 3.405
- type: precision_at_100
value: 0.903
- type: precision_at_1000
value: 0.11800000000000001
- type: mrr_at_1
value: 28.067500000000003
- type: mrr_at_3
value: 33.8957
- type: mrr_at_5
value: 35.0997
- type: mrr_at_10
value: 36.0272
- type: mrr_at_20
value: 36.4454
- type: mrr_at_100
value: 36.8325
- type: mrr_at_1000
value: 36.8906
- type: nauc_ndcg_at_1_max
value: 41.64
- type: nauc_ndcg_at_1_std
value: -3.0991999999999997
- type: nauc_ndcg_at_1_diff1
value: 52.059
- type: nauc_ndcg_at_3_max
value: 38.3407
- type: nauc_ndcg_at_3_std
value: -2.0187
- type: nauc_ndcg_at_3_diff1
value: 44.6053
- type: nauc_ndcg_at_5_max
value: 39.5482
- type: nauc_ndcg_at_5_std
value: 0.6605
- type: nauc_ndcg_at_5_diff1
value: 44.1187
- type: nauc_ndcg_at_10_max
value: 40.2625
- type: nauc_ndcg_at_10_std
value: 1.6514999999999997
- type: nauc_ndcg_at_10_diff1
value: 43.170500000000004
- type: nauc_ndcg_at_20_max
value: 40.067
- type: nauc_ndcg_at_20_std
value: 2.1887
- type: nauc_ndcg_at_20_diff1
value: 42.8359
- type: nauc_ndcg_at_100_max
value: 41.749900000000004
- type: nauc_ndcg_at_100_std
value: 4.3462
- type: nauc_ndcg_at_100_diff1
value: 42.1422
- type: nauc_ndcg_at_1000_max
value: 41.4899
- type: nauc_ndcg_at_1000_std
value: 3.9956
- type: nauc_ndcg_at_1000_diff1
value: 42.4235
- type: nauc_map_at_1_max
value: 39.1049
- type: nauc_map_at_1_std
value: -7.072000000000001
- type: nauc_map_at_1_diff1
value: 53.76840000000001
- type: nauc_map_at_3_max
value: 38.3832
- type: nauc_map_at_3_std
value: -4.0869
- type: nauc_map_at_3_diff1
value: 46.848600000000005
- type: nauc_map_at_5_max
value: 39.4646
- type: nauc_map_at_5_std
value: -2.0288
- type: nauc_map_at_5_diff1
value: 46.3888
- type: nauc_map_at_10_max
value: 39.8593
- type: nauc_map_at_10_std
value: -1.4203000000000001
- type: nauc_map_at_10_diff1
value: 45.9306
- type: nauc_map_at_20_max
value: 39.835300000000004
- type: nauc_map_at_20_std
value: -1.2231
- type: nauc_map_at_20_diff1
value: 45.8283
- type: nauc_map_at_100_max
value: 40.1343
- type: nauc_map_at_100_std
value: -0.9245
- type: nauc_map_at_100_diff1
value: 45.7762
- type: nauc_map_at_1000_max
value: 40.1356
- type: nauc_map_at_1000_std
value: -0.9329000000000001
- type: nauc_map_at_1000_diff1
value: 45.785
- type: nauc_recall_at_1_max
value: 39.1049
- type: nauc_recall_at_1_std
value: -7.072000000000001
- type: nauc_recall_at_1_diff1
value: 53.76840000000001
- type: nauc_recall_at_3_max
value: 34.5115
- type: nauc_recall_at_3_std
value: -1.5186
- type: nauc_recall_at_3_diff1
value: 39.2881
- type: nauc_recall_at_5_max
value: 36.8705
- type: nauc_recall_at_5_std
value: 5.2115
- type: nauc_recall_at_5_diff1
value: 37.2112
- type: nauc_recall_at_10_max
value: 38.9486
- type: nauc_recall_at_10_std
value: 8.558
- type: nauc_recall_at_10_diff1
value: 34.027499999999996
- type: nauc_recall_at_20_max
value: 37.4174
- type: nauc_recall_at_20_std
value: 10.7121
- type: nauc_recall_at_20_diff1
value: 31.6372
- type: nauc_recall_at_100_max
value: 45.7135
- type: nauc_recall_at_100_std
value: 26.958900000000003
- type: nauc_recall_at_100_diff1
value: 22.6293
- type: nauc_recall_at_1000_max
value: 45.8455
- type: nauc_recall_at_1000_std
value: 41.8128
- type: nauc_recall_at_1000_diff1
value: 11.1735
- type: nauc_precision_at_1_max
value: 41.64
- type: nauc_precision_at_1_std
value: -3.0991999999999997
- type: nauc_precision_at_1_diff1
value: 52.059
- type: nauc_precision_at_3_max
value: 37.5109
- type: nauc_precision_at_3_std
value: 4.5869
- type: nauc_precision_at_3_diff1
value: 35.604200000000006
- type: nauc_precision_at_5_max
value: 39.441500000000005
- type: nauc_precision_at_5_std
value: 12.413499999999999
- type: nauc_precision_at_5_diff1
value: 31.566699999999997
- type: nauc_precision_at_10_max
value: 39.3943
- type: nauc_precision_at_10_std
value: 14.4375
- type: nauc_precision_at_10_diff1
value: 26.4044
- type: nauc_precision_at_20_max
value: 34.6082
- type: nauc_precision_at_20_std
value: 15.573899999999998
- type: nauc_precision_at_20_diff1
value: 21.3312
- type: nauc_precision_at_100_max
value: 33.6787
- type: nauc_precision_at_100_std
value: 24.4628
- type: nauc_precision_at_100_diff1
value: 9.238399999999999
- type: nauc_precision_at_1000_max
value: 15.7002
- type: nauc_precision_at_1000_std
value: 17.6244
- type: nauc_precision_at_1000_diff1
value: -2.8333
- type: nauc_mrr_at_1_max
value: 41.64
- type: nauc_mrr_at_1_std
value: -3.0991999999999997
- type: nauc_mrr_at_1_diff1
value: 52.059
- type: nauc_mrr_at_3_max
value: 40.2887
- type: nauc_mrr_at_3_std
value: -0.48650000000000004
- type: nauc_mrr_at_3_diff1
value: 46.2812
- type: nauc_mrr_at_5_max
value: 40.792899999999996
- type: nauc_mrr_at_5_std
value: 0.7635000000000001
- type: nauc_mrr_at_5_diff1
value: 45.8179
- type: nauc_mrr_at_10_max
value: 40.970099999999995
- type: nauc_mrr_at_10_std
value: 0.9508000000000001
- type: nauc_mrr_at_10_diff1
value: 45.4065
- type: nauc_mrr_at_20_max
value: 40.9322
- type: nauc_mrr_at_20_std
value: 1.0284
- type: nauc_mrr_at_20_diff1
value: 45.440999999999995
- type: nauc_mrr_at_100_max
value: 41.1209
- type: nauc_mrr_at_100_std
value: 1.2597
- type: nauc_mrr_at_100_diff1
value: 45.3654
- type: nauc_mrr_at_1000_max
value: 41.1143
- type: nauc_mrr_at_1000_std
value: 1.2467000000000001
- type: nauc_mrr_at_1000_diff1
value: 45.3792
- type: main_score
value: 37.711
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval (default)
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_3
value: 25.308000000000003
- type: ndcg_at_5
value: 27.211999999999996
- type: ndcg_at_10
value: 29.759999999999998
- type: ndcg_at_20
value: 31.806
- type: ndcg_at_100
value: 35.148
- type: ndcg_at_1000
value: 38.115
- type: map_at_1
value: 17.635
- type: map_at_3
value: 22.537
- type: map_at_5
value: 23.834
- type: map_at_10
value: 24.984
- type: map_at_20
value: 25.613999999999997
- type: map_at_100
value: 26.125
- type: map_at_1000
value: 26.256
- type: recall_at_1
value: 17.635
- type: recall_at_3
value: 27.759
- type: recall_at_5
value: 32.688
- type: recall_at_10
value: 40.326
- type: recall_at_20
value: 47.865
- type: recall_at_100
value: 64.43799999999999
- type: recall_at_1000
value: 85.589
- type: precision_at_1
value: 21.37
- type: precision_at_3
value: 11.928999999999998
- type: precision_at_5
value: 8.679
- type: precision_at_10
value: 5.502
- type: precision_at_20
value: 3.345
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.13899999999999998
- type: mrr_at_1
value: 21.3696
- type: mrr_at_3
value: 26.4854
- type: mrr_at_5
value: 27.726
- type: mrr_at_10
value: 28.842499999999998
- type: mrr_at_20
value: 29.3902
- type: mrr_at_100
value: 29.7846
- type: mrr_at_1000
value: 29.860799999999998
- type: nauc_ndcg_at_1_max
value: 31.770300000000002
- type: nauc_ndcg_at_1_std
value: -4.784999999999999
- type: nauc_ndcg_at_1_diff1
value: 42.290499999999994
- type: nauc_ndcg_at_3_max
value: 31.1434
- type: nauc_ndcg_at_3_std
value: -2.8424
- type: nauc_ndcg_at_3_diff1
value: 36.7329
- type: nauc_ndcg_at_5_max
value: 31.1525
- type: nauc_ndcg_at_5_std
value: -2.2824
- type: nauc_ndcg_at_5_diff1
value: 35.517199999999995
- type: nauc_ndcg_at_10_max
value: 31.3549
- type: nauc_ndcg_at_10_std
value: -1.089
- type: nauc_ndcg_at_10_diff1
value: 34.9647
- type: nauc_ndcg_at_20_max
value: 31.3283
- type: nauc_ndcg_at_20_std
value: -0.5032
- type: nauc_ndcg_at_20_diff1
value: 34.73
- type: nauc_ndcg_at_100_max
value: 31.3324
- type: nauc_ndcg_at_100_std
value: 0.8308
- type: nauc_ndcg_at_100_diff1
value: 34.0739
- type: nauc_ndcg_at_1000_max
value: 31.563799999999997
- type: nauc_ndcg_at_1000_std
value: 1.0345
- type: nauc_ndcg_at_1000_diff1
value: 34.321400000000004
- type: nauc_map_at_1_max
value: 29.935299999999998
- type: nauc_map_at_1_std
value: -4.6685
- type: nauc_map_at_1_diff1
value: 43.6434
- type: nauc_map_at_3_max
value: 30.476
- type: nauc_map_at_3_std
value: -3.3331
- type: nauc_map_at_3_diff1
value: 38.6884
- type: nauc_map_at_5_max
value: 30.625200000000003
- type: nauc_map_at_5_std
value: -3.0722
- type: nauc_map_at_5_diff1
value: 37.845
- type: nauc_map_at_10_max
value: 30.8581
- type: nauc_map_at_10_std
value: -2.5201000000000002
- type: nauc_map_at_10_diff1
value: 37.5217
- type: nauc_map_at_20_max
value: 30.9267
- type: nauc_map_at_20_std
value: -2.3167
- type: nauc_map_at_20_diff1
value: 37.4216
- type: nauc_map_at_100_max
value: 31.0064
- type: nauc_map_at_100_std
value: -2.0629999999999997
- type: nauc_map_at_100_diff1
value: 37.3075
- type: nauc_map_at_1000_max
value: 31.0478
- type: nauc_map_at_1000_std
value: -2.0301
- type: nauc_map_at_1000_diff1
value: 37.3077
- type: nauc_recall_at_1_max
value: 29.935299999999998
- type: nauc_recall_at_1_std
value: -4.6685
- type: nauc_recall_at_1_diff1
value: 43.6434
- type: nauc_recall_at_3_max
value: 29.2327
- type: nauc_recall_at_3_std
value: -1.8466
- type: nauc_recall_at_3_diff1
value: 32.5214
- type: nauc_recall_at_5_max
value: 28.8576
- type: nauc_recall_at_5_std
value: -0.8358000000000001
- type: nauc_recall_at_5_diff1
value: 29.329499999999996
- type: nauc_recall_at_10_max
value: 28.8851
- type: nauc_recall_at_10_std
value: 2.3084000000000002
- type: nauc_recall_at_10_diff1
value: 27.3001
- type: nauc_recall_at_20_max
value: 28.0772
- type: nauc_recall_at_20_std
value: 4.2632
- type: nauc_recall_at_20_diff1
value: 25.6873
- type: nauc_recall_at_100_max
value: 27.4461
- type: nauc_recall_at_100_std
value: 11.9175
- type: nauc_recall_at_100_diff1
value: 20.7784
- type: nauc_recall_at_1000_max
value: 27.1262
- type: nauc_recall_at_1000_std
value: 24.4024
- type: nauc_recall_at_1000_diff1
value: 14.5445
- type: nauc_precision_at_1_max
value: 31.770300000000002
- type: nauc_precision_at_1_std
value: -4.784999999999999
- type: nauc_precision_at_1_diff1
value: 42.290499999999994
- type: nauc_precision_at_3_max
value: 32.5608
- type: nauc_precision_at_3_std
value: -1.3823999999999999
- type: nauc_precision_at_3_diff1
value: 30.9278
- type: nauc_precision_at_5_max
value: 32.0685
- type: nauc_precision_at_5_std
value: -0.2231
- type: nauc_precision_at_5_diff1
value: 26.8139
- type: nauc_precision_at_10_max
value: 31.8615
- type: nauc_precision_at_10_std
value: 3.3291
- type: nauc_precision_at_10_diff1
value: 22.608800000000002
- type: nauc_precision_at_20_max
value: 30.250799999999998
- type: nauc_precision_at_20_std
value: 5.242
- type: nauc_precision_at_20_diff1
value: 19.532
- type: nauc_precision_at_100_max
value: 25.2481
- type: nauc_precision_at_100_std
value: 9.711599999999999
- type: nauc_precision_at_100_diff1
value: 9.5108
- type: nauc_precision_at_1000_max
value: 19.072
- type: nauc_precision_at_1000_std
value: 9.0718
- type: nauc_precision_at_1000_diff1
value: -0.21090000000000003
- type: nauc_mrr_at_1_max
value: 31.770300000000002
- type: nauc_mrr_at_1_std
value: -4.784999999999999
- type: nauc_mrr_at_1_diff1
value: 42.290499999999994
- type: nauc_mrr_at_3_max
value: 31.5869
- type: nauc_mrr_at_3_std
value: -3.2058999999999997
- type: nauc_mrr_at_3_diff1
value: 37.3799
- type: nauc_mrr_at_5_max
value: 31.675199999999997
- type: nauc_mrr_at_5_std
value: -2.7127
- type: nauc_mrr_at_5_diff1
value: 36.5429
- type: nauc_mrr_at_10_max
value: 31.7662
- type: nauc_mrr_at_10_std
value: -2.314
- type: nauc_mrr_at_10_diff1
value: 36.3532
- type: nauc_mrr_at_20_max
value: 31.771300000000004
- type: nauc_mrr_at_20_std
value: -2.1448
- type: nauc_mrr_at_20_diff1
value: 36.3367
- type: nauc_mrr_at_100_max
value: 31.767899999999997
- type: nauc_mrr_at_100_std
value: -2.0333
- type: nauc_mrr_at_100_diff1
value: 36.2815
- type: nauc_mrr_at_1000_max
value: 31.7795
- type: nauc_mrr_at_1000_std
value: -2.0261
- type: nauc_mrr_at_1000_diff1
value: 36.2999
- type: main_score
value: 29.759999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval (default)
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_3
value: 38.403
- type: ndcg_at_5
value: 40.319
- type: ndcg_at_10
value: 42.834
- type: ndcg_at_20
value: 44.932
- type: ndcg_at_100
value: 47.833
- type: ndcg_at_1000
value: 50.157
- type: map_at_1
value: 28.457
- type: map_at_3
value: 35.184
- type: map_at_5
value: 36.532
- type: map_at_10
value: 37.714
- type: map_at_20
value: 38.340999999999994
- type: map_at_100
value: 38.797
- type: map_at_1000
value: 38.903999999999996
- type: recall_at_1
value: 28.457
- type: recall_at_3
value: 41.937999999999995
- type: recall_at_5
value: 46.911
- type: recall_at_10
value: 54.303000000000004
- type: recall_at_20
value: 61.906000000000006
- type: recall_at_100
value: 76.074
- type: recall_at_1000
value: 92.191
- type: precision_at_1
value: 33.302
- type: precision_at_3
value: 17.382
- type: precision_at_5
value: 11.922
- type: precision_at_10
value: 7.08
- type: precision_at_20
value: 4.137
- type: precision_at_100
value: 1.064
- type: precision_at_1000
value: 0.13799999999999998
- type: mrr_at_1
value: 33.3022
- type: mrr_at_3
value: 39.5056
- type: mrr_at_5
value: 40.7276
- type: mrr_at_10
value: 41.7227
- type: mrr_at_20
value: 42.270799999999994
- type: mrr_at_100
value: 42.5991
- type: mrr_at_1000
value: 42.653999999999996
- type: nauc_ndcg_at_1_max
value: 41.5343
- type: nauc_ndcg_at_1_std
value: -2.8242
- type: nauc_ndcg_at_1_diff1
value: 55.388099999999994
- type: nauc_ndcg_at_3_max
value: 41.531800000000004
- type: nauc_ndcg_at_3_std
value: -0.0958
- type: nauc_ndcg_at_3_diff1
value: 50.5951
- type: nauc_ndcg_at_5_max
value: 41.0756
- type: nauc_ndcg_at_5_std
value: 0.7116
- type: nauc_ndcg_at_5_diff1
value: 49.0397
- type: nauc_ndcg_at_10_max
value: 40.5656
- type: nauc_ndcg_at_10_std
value: 1.2275
- type: nauc_ndcg_at_10_diff1
value: 48.1935
- type: nauc_ndcg_at_20_max
value: 39.967000000000006
- type: nauc_ndcg_at_20_std
value: 1.2213
- type: nauc_ndcg_at_20_diff1
value: 47.5459
- type: nauc_ndcg_at_100_max
value: 40.2487
- type: nauc_ndcg_at_100_std
value: 2.6310000000000002
- type: nauc_ndcg_at_100_diff1
value: 47.3499
- type: nauc_ndcg_at_1000_max
value: 40.802
- type: nauc_ndcg_at_1000_std
value: 2.9029
- type: nauc_ndcg_at_1000_diff1
value: 47.893299999999996
- type: nauc_map_at_1_max
value: 40.0689
- type: nauc_map_at_1_std
value: -3.2761
- type: nauc_map_at_1_diff1
value: 56.685399999999994
- type: nauc_map_at_3_max
value: 41.350500000000004
- type: nauc_map_at_3_std
value: -0.6871999999999999
- type: nauc_map_at_3_diff1
value: 52.737100000000005
- type: nauc_map_at_5_max
value: 41.1119
- type: nauc_map_at_5_std
value: -0.23340000000000002
- type: nauc_map_at_5_diff1
value: 51.5269
- type: nauc_map_at_10_max
value: 40.860400000000006
- type: nauc_map_at_10_std
value: -0.08760000000000001
- type: nauc_map_at_10_diff1
value: 51.01369999999999
- type: nauc_map_at_20_max
value: 40.5859
- type: nauc_map_at_20_std
value: -0.154
- type: nauc_map_at_20_diff1
value: 50.744699999999995
- type: nauc_map_at_100_max
value: 40.646300000000004
- type: nauc_map_at_100_std
value: 0.10189999999999999
- type: nauc_map_at_100_diff1
value: 50.7085
- type: nauc_map_at_1000_max
value: 40.6731
- type: nauc_map_at_1000_std
value: 0.1394
- type: nauc_map_at_1000_diff1
value: 50.708
- type: nauc_recall_at_1_max
value: 40.0689
- type: nauc_recall_at_1_std
value: -3.2761
- type: nauc_recall_at_1_diff1
value: 56.685399999999994
- type: nauc_recall_at_3_max
value: 40.5338
- type: nauc_recall_at_3_std
value: 1.4996
- type: nauc_recall_at_3_diff1
value: 46.9882
- type: nauc_recall_at_5_max
value: 39.745999999999995
- type: nauc_recall_at_5_std
value: 3.7415
- type: nauc_recall_at_5_diff1
value: 42.7628
- type: nauc_recall_at_10_max
value: 37.6122
- type: nauc_recall_at_10_std
value: 5.1345
- type: nauc_recall_at_10_diff1
value: 39.2683
- type: nauc_recall_at_20_max
value: 34.9745
- type: nauc_recall_at_20_std
value: 5.7971
- type: nauc_recall_at_20_diff1
value: 35.6486
- type: nauc_recall_at_100_max
value: 35.1278
- type: nauc_recall_at_100_std
value: 16.569
- type: nauc_recall_at_100_diff1
value: 30.4082
- type: nauc_recall_at_1000_max
value: 48.1561
- type: nauc_recall_at_1000_std
value: 46.2123
- type: nauc_recall_at_1000_diff1
value: 28.9314
- type: nauc_precision_at_1_max
value: 41.5343
- type: nauc_precision_at_1_std
value: -2.8242
- type: nauc_precision_at_1_diff1
value: 55.388099999999994
- type: nauc_precision_at_3_max
value: 37.9897
- type: nauc_precision_at_3_std
value: 2.563
- type: nauc_precision_at_3_diff1
value: 37.253
- type: nauc_precision_at_5_max
value: 33.9735
- type: nauc_precision_at_5_std
value: 3.5601000000000003
- type: nauc_precision_at_5_diff1
value: 29.017300000000002
- type: nauc_precision_at_10_max
value: 27.8221
- type: nauc_precision_at_10_std
value: 4.3591999999999995
- type: nauc_precision_at_10_diff1
value: 20.7948
- type: nauc_precision_at_20_max
value: 21.0119
- type: nauc_precision_at_20_std
value: 4.4604
- type: nauc_precision_at_20_diff1
value: 12.5115
- type: nauc_precision_at_100_max
value: 11.1615
- type: nauc_precision_at_100_std
value: 10.1361
- type: nauc_precision_at_100_diff1
value: -2.5748
- type: nauc_precision_at_1000_max
value: -3.5173
- type: nauc_precision_at_1000_std
value: 6.248
- type: nauc_precision_at_1000_diff1
value: -17.6147
- type: nauc_mrr_at_1_max
value: 41.5343
- type: nauc_mrr_at_1_std
value: -2.8242
- type: nauc_mrr_at_1_diff1
value: 55.388099999999994
- type: nauc_mrr_at_3_max
value: 41.599199999999996
- type: nauc_mrr_at_3_std
value: -0.5716
- type: nauc_mrr_at_3_diff1
value: 50.932100000000005
- type: nauc_mrr_at_5_max
value: 41.2312
- type: nauc_mrr_at_5_std
value: -0.2443
- type: nauc_mrr_at_5_diff1
value: 49.9174
- type: nauc_mrr_at_10_max
value: 41.0053
- type: nauc_mrr_at_10_std
value: 0.0628
- type: nauc_mrr_at_10_diff1
value: 49.6375
- type: nauc_mrr_at_20_max
value: 40.930499999999995
- type: nauc_mrr_at_20_std
value: -0.063
- type: nauc_mrr_at_20_diff1
value: 49.6391
- type: nauc_mrr_at_100_max
value: 40.9473
- type: nauc_mrr_at_100_std
value: 0.0646
- type: nauc_mrr_at_100_diff1
value: 49.6701
- type: nauc_mrr_at_1000_max
value: 40.9676
- type: nauc_mrr_at_1000_std
value: 0.0838
- type: nauc_mrr_at_1000_diff1
value: 49.695299999999996
- type: main_score
value: 42.834
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval (default)
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_3
value: 37.911
- type: ndcg_at_5
value: 39.983000000000004
- type: ndcg_at_10
value: 42.321999999999996
- type: ndcg_at_20
value: 44.855000000000004
- type: ndcg_at_100
value: 48.515
- type: ndcg_at_1000
value: 50.845
- type: map_at_1
value: 27.062
- type: map_at_3
value: 33.689
- type: map_at_5
value: 35.161
- type: map_at_10
value: 36.492000000000004
- type: map_at_20
value: 37.486999999999995
- type: map_at_100
value: 38.235
- type: map_at_1000
value: 38.421
- type: recall_at_1
value: 27.062
- type: recall_at_3
value: 40.459
- type: recall_at_5
value: 46.221000000000004
- type: recall_at_10
value: 53.348
- type: recall_at_20
value: 62.852
- type: recall_at_100
value: 80.582
- type: recall_at_1000
value: 95.14099999999999
- type: precision_at_1
value: 32.411
- type: precision_at_3
value: 17.984
- type: precision_at_5
value: 12.767000000000001
- type: precision_at_10
value: 7.945
- type: precision_at_20
value: 5.0
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.234
- type: mrr_at_1
value: 32.4111
- type: mrr_at_3
value: 38.8011
- type: mrr_at_5
value: 40.2437
- type: mrr_at_10
value: 41.1494
- type: mrr_at_20
value: 41.8962
- type: mrr_at_100
value: 42.275800000000004
- type: mrr_at_1000
value: 42.3273
- type: nauc_ndcg_at_1_max
value: 27.961799999999997
- type: nauc_ndcg_at_1_std
value: 1.9207999999999998
- type: nauc_ndcg_at_1_diff1
value: 47.9837
- type: nauc_ndcg_at_3_max
value: 28.009099999999997
- type: nauc_ndcg_at_3_std
value: 1.212
- type: nauc_ndcg_at_3_diff1
value: 42.1361
- type: nauc_ndcg_at_5_max
value: 27.304299999999998
- type: nauc_ndcg_at_5_std
value: 1.4559
- type: nauc_ndcg_at_5_diff1
value: 40.8799
- type: nauc_ndcg_at_10_max
value: 26.0726
- type: nauc_ndcg_at_10_std
value: 1.5731
- type: nauc_ndcg_at_10_diff1
value: 38.9119
- type: nauc_ndcg_at_20_max
value: 28.139799999999997
- type: nauc_ndcg_at_20_std
value: 3.0962
- type: nauc_ndcg_at_20_diff1
value: 39.0918
- type: nauc_ndcg_at_100_max
value: 29.0945
- type: nauc_ndcg_at_100_std
value: 5.6239
- type: nauc_ndcg_at_100_diff1
value: 39.4526
- type: nauc_ndcg_at_1000_max
value: 28.7139
- type: nauc_ndcg_at_1000_std
value: 4.3576
- type: nauc_ndcg_at_1000_diff1
value: 40.1353
- type: nauc_map_at_1_max
value: 26.4001
- type: nauc_map_at_1_std
value: -2.4035
- type: nauc_map_at_1_diff1
value: 50.6355
- type: nauc_map_at_3_max
value: 27.6775
- type: nauc_map_at_3_std
value: -1.2323
- type: nauc_map_at_3_diff1
value: 45.1028
- type: nauc_map_at_5_max
value: 27.7501
- type: nauc_map_at_5_std
value: -1.0206
- type: nauc_map_at_5_diff1
value: 44.137100000000004
- type: nauc_map_at_10_max
value: 27.3169
- type: nauc_map_at_10_std
value: -0.6242
- type: nauc_map_at_10_diff1
value: 42.992799999999995
- type: nauc_map_at_20_max
value: 27.9088
- type: nauc_map_at_20_std
value: 0.369
- type: nauc_map_at_20_diff1
value: 42.7076
- type: nauc_map_at_100_max
value: 28.0018
- type: nauc_map_at_100_std
value: 1.0477999999999998
- type: nauc_map_at_100_diff1
value: 42.663000000000004
- type: nauc_map_at_1000_max
value: 27.8892
- type: nauc_map_at_1000_std
value: 1.0114
- type: nauc_map_at_1000_diff1
value: 42.6802
- type: nauc_recall_at_1_max
value: 26.4001
- type: nauc_recall_at_1_std
value: -2.4035
- type: nauc_recall_at_1_diff1
value: 50.6355
- type: nauc_recall_at_3_max
value: 26.4415
- type: nauc_recall_at_3_std
value: 0.6093000000000001
- type: nauc_recall_at_3_diff1
value: 38.3001
- type: nauc_recall_at_5_max
value: 25.5757
- type: nauc_recall_at_5_std
value: 1.7046999999999999
- type: nauc_recall_at_5_diff1
value: 33.9953
- type: nauc_recall_at_10_max
value: 21.9077
- type: nauc_recall_at_10_std
value: 2.4832
- type: nauc_recall_at_10_diff1
value: 27.6569
- type: nauc_recall_at_20_max
value: 27.9785
- type: nauc_recall_at_20_std
value: 8.717
- type: nauc_recall_at_20_diff1
value: 26.076
- type: nauc_recall_at_100_max
value: 32.8372
- type: nauc_recall_at_100_std
value: 28.644799999999996
- type: nauc_recall_at_100_diff1
value: 22.3344
- type: nauc_recall_at_1000_max
value: 43.087199999999996
- type: nauc_recall_at_1000_std
value: 38.6013
- type: nauc_recall_at_1000_diff1
value: 19.057399999999998
- type: nauc_precision_at_1_max
value: 27.961799999999997
- type: nauc_precision_at_1_std
value: 1.9207999999999998
- type: nauc_precision_at_1_diff1
value: 47.9837
- type: nauc_precision_at_3_max
value: 26.680999999999997
- type: nauc_precision_at_3_std
value: 6.4623
- type: nauc_precision_at_3_diff1
value: 26.0754
- type: nauc_precision_at_5_max
value: 23.0766
- type: nauc_precision_at_5_std
value: 8.0635
- type: nauc_precision_at_5_diff1
value: 18.249399999999998
- type: nauc_precision_at_10_max
value: 14.0187
- type: nauc_precision_at_10_std
value: 10.793999999999999
- type: nauc_precision_at_10_diff1
value: 5.7888
- type: nauc_precision_at_20_max
value: 12.065
- type: nauc_precision_at_20_std
value: 15.728800000000001
- type: nauc_precision_at_20_diff1
value: -0.7351
- type: nauc_precision_at_100_max
value: -0.4148
- type: nauc_precision_at_100_std
value: 17.0201
- type: nauc_precision_at_100_diff1
value: -8.088099999999999
- type: nauc_precision_at_1000_max
value: -18.342
- type: nauc_precision_at_1000_std
value: 5.6757
- type: nauc_precision_at_1000_diff1
value: -13.869200000000001
- type: nauc_mrr_at_1_max
value: 27.961799999999997
- type: nauc_mrr_at_1_std
value: 1.9207999999999998
- type: nauc_mrr_at_1_diff1
value: 47.9837
- type: nauc_mrr_at_3_max
value: 27.7754
- type: nauc_mrr_at_3_std
value: 2.2727
- type: nauc_mrr_at_3_diff1
value: 42.864999999999995
- type: nauc_mrr_at_5_max
value: 27.7453
- type: nauc_mrr_at_5_std
value: 2.7718
- type: nauc_mrr_at_5_diff1
value: 41.9633
- type: nauc_mrr_at_10_max
value: 27.308300000000003
- type: nauc_mrr_at_10_std
value: 3.089
- type: nauc_mrr_at_10_diff1
value: 41.3641
- type: nauc_mrr_at_20_max
value: 27.814299999999996
- type: nauc_mrr_at_20_std
value: 3.2985
- type: nauc_mrr_at_20_diff1
value: 41.6228
- type: nauc_mrr_at_100_max
value: 27.8378
- type: nauc_mrr_at_100_std
value: 3.517
- type: nauc_mrr_at_100_diff1
value: 41.7328
- type: nauc_mrr_at_1000_max
value: 27.8277
- type: nauc_mrr_at_1000_std
value: 3.4743000000000004
- type: nauc_mrr_at_1000_diff1
value: 41.7584
- type: main_score
value: 42.321999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval (default)
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: ndcg_at_1
value: 23.105
- type: ndcg_at_3
value: 28.781000000000002
- type: ndcg_at_5
value: 31.338
- type: ndcg_at_10
value: 34.091
- type: ndcg_at_20
value: 36.046
- type: ndcg_at_100
value: 39.556999999999995
- type: ndcg_at_1000
value: 41.647
- type: map_at_1
value: 21.448
- type: map_at_3
value: 26.527
- type: map_at_5
value: 28.02
- type: map_at_10
value: 29.204
- type: map_at_20
value: 29.774
- type: map_at_100
value: 30.278
- type: map_at_1000
value: 30.364
- type: recall_at_1
value: 21.448
- type: recall_at_3
value: 33.167
- type: recall_at_5
value: 39.156
- type: recall_at_10
value: 47.277
- type: recall_at_20
value: 54.639
- type: recall_at_100
value: 72.809
- type: recall_at_1000
value: 88.099
- type: precision_at_1
value: 23.105
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.834999999999999
- type: precision_at_10
value: 5.434
- type: precision_at_20
value: 3.189
- type: precision_at_100
value: 0.8710000000000001
- type: precision_at_1000
value: 0.11499999999999999
- type: mrr_at_1
value: 23.1054
- type: mrr_at_3
value: 28.5582
- type: mrr_at_5
value: 30.0462
- type: mrr_at_10
value: 31.1854
- type: mrr_at_20
value: 31.6775
- type: mrr_at_100
value: 32.1183
- type: mrr_at_1000
value: 32.1723
- type: nauc_ndcg_at_1_max
value: 30.894
- type: nauc_ndcg_at_1_std
value: 0.8228
- type: nauc_ndcg_at_1_diff1
value: 50.571600000000004
- type: nauc_ndcg_at_3_max
value: 24.9603
- type: nauc_ndcg_at_3_std
value: -0.3032
- type: nauc_ndcg_at_3_diff1
value: 43.803799999999995
- type: nauc_ndcg_at_5_max
value: 26.1479
- type: nauc_ndcg_at_5_std
value: 0.3038
- type: nauc_ndcg_at_5_diff1
value: 42.5296
- type: nauc_ndcg_at_10_max
value: 26.0992
- type: nauc_ndcg_at_10_std
value: 1.2644
- type: nauc_ndcg_at_10_diff1
value: 41.943000000000005
- type: nauc_ndcg_at_20_max
value: 26.132300000000004
- type: nauc_ndcg_at_20_std
value: 1.798
- type: nauc_ndcg_at_20_diff1
value: 41.1586
- type: nauc_ndcg_at_100_max
value: 26.4048
- type: nauc_ndcg_at_100_std
value: 3.7023
- type: nauc_ndcg_at_100_diff1
value: 41.3297
- type: nauc_ndcg_at_1000_max
value: 26.889200000000002
- type: nauc_ndcg_at_1000_std
value: 3.7087000000000003
- type: nauc_ndcg_at_1000_diff1
value: 41.716300000000004
- type: nauc_map_at_1_max
value: 27.5981
- type: nauc_map_at_1_std
value: 0.387
- type: nauc_map_at_1_diff1
value: 48.6362
- type: nauc_map_at_3_max
value: 24.8521
- type: nauc_map_at_3_std
value: -0.414
- type: nauc_map_at_3_diff1
value: 44.766600000000004
- type: nauc_map_at_5_max
value: 25.937900000000003
- type: nauc_map_at_5_std
value: -0.054900000000000004
- type: nauc_map_at_5_diff1
value: 44.0302
- type: nauc_map_at_10_max
value: 26.018
- type: nauc_map_at_10_std
value: 0.3584
- type: nauc_map_at_10_diff1
value: 43.7009
- type: nauc_map_at_20_max
value: 26.0129
- type: nauc_map_at_20_std
value: 0.5091
- type: nauc_map_at_20_diff1
value: 43.4823
- type: nauc_map_at_100_max
value: 26.1059
- type: nauc_map_at_100_std
value: 0.7867999999999999
- type: nauc_map_at_100_diff1
value: 43.4867
- type: nauc_map_at_1000_max
value: 26.131500000000003
- type: nauc_map_at_1000_std
value: 0.8026
- type: nauc_map_at_1000_diff1
value: 43.5097
- type: nauc_recall_at_1_max
value: 27.5981
- type: nauc_recall_at_1_std
value: 0.387
- type: nauc_recall_at_1_diff1
value: 48.6362
- type: nauc_recall_at_3_max
value: 21.7315
- type: nauc_recall_at_3_std
value: -1.0671
- type: nauc_recall_at_3_diff1
value: 39.4999
- type: nauc_recall_at_5_max
value: 23.994699999999998
- type: nauc_recall_at_5_std
value: 0.0779
- type: nauc_recall_at_5_diff1
value: 36.9505
- type: nauc_recall_at_10_max
value: 23.2468
- type: nauc_recall_at_10_std
value: 2.654
- type: nauc_recall_at_10_diff1
value: 35.158899999999996
- type: nauc_recall_at_20_max
value: 23.28
- type: nauc_recall_at_20_std
value: 4.8041
- type: nauc_recall_at_20_diff1
value: 31.547399999999996
- type: nauc_recall_at_100_max
value: 21.7186
- type: nauc_recall_at_100_std
value: 17.083000000000002
- type: nauc_recall_at_100_diff1
value: 29.229899999999997
- type: nauc_recall_at_1000_max
value: 28.9168
- type: nauc_recall_at_1000_std
value: 29.9591
- type: nauc_recall_at_1000_diff1
value: 27.0436
- type: nauc_precision_at_1_max
value: 30.894
- type: nauc_precision_at_1_std
value: 0.8228
- type: nauc_precision_at_1_diff1
value: 50.571600000000004
- type: nauc_precision_at_3_max
value: 25.076999999999998
- type: nauc_precision_at_3_std
value: 0.39890000000000003
- type: nauc_precision_at_3_diff1
value: 40.618300000000005
- type: nauc_precision_at_5_max
value: 29.274299999999997
- type: nauc_precision_at_5_std
value: 3.02
- type: nauc_precision_at_5_diff1
value: 35.3233
- type: nauc_precision_at_10_max
value: 28.1411
- type: nauc_precision_at_10_std
value: 6.628100000000001
- type: nauc_precision_at_10_diff1
value: 30.949700000000004
- type: nauc_precision_at_20_max
value: 25.974999999999998
- type: nauc_precision_at_20_std
value: 8.3134
- type: nauc_precision_at_20_diff1
value: 25.324799999999996
- type: nauc_precision_at_100_max
value: 22.682
- type: nauc_precision_at_100_std
value: 20.4648
- type: nauc_precision_at_100_diff1
value: 13.2139
- type: nauc_precision_at_1000_max
value: 2.8796
- type: nauc_precision_at_1000_std
value: 10.6158
- type: nauc_precision_at_1000_diff1
value: -11.8614
- type: nauc_mrr_at_1_max
value: 30.894
- type: nauc_mrr_at_1_std
value: 0.8228
- type: nauc_mrr_at_1_diff1
value: 50.571600000000004
- type: nauc_mrr_at_3_max
value: 27.8993
- type: nauc_mrr_at_3_std
value: 0.5541
- type: nauc_mrr_at_3_diff1
value: 46.307900000000004
- type: nauc_mrr_at_5_max
value: 28.4404
- type: nauc_mrr_at_5_std
value: 0.8992
- type: nauc_mrr_at_5_diff1
value: 45.405699999999996
- type: nauc_mrr_at_10_max
value: 28.492099999999997
- type: nauc_mrr_at_10_std
value: 1.3769
- type: nauc_mrr_at_10_diff1
value: 45.163
- type: nauc_mrr_at_20_max
value: 28.4509
- type: nauc_mrr_at_20_std
value: 1.4745
- type: nauc_mrr_at_20_diff1
value: 44.9459
- type: nauc_mrr_at_100_max
value: 28.533199999999997
- type: nauc_mrr_at_100_std
value: 1.7016
- type: nauc_mrr_at_100_diff1
value: 45.0053
- type: nauc_mrr_at_1000_max
value: 28.5364
- type: nauc_mrr_at_1000_std
value: 1.6894
- type: nauc_mrr_at_1000_diff1
value: 45.0407
- type: main_score
value: 34.091
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER (default)
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: ndcg_at_1
value: 32.964
- type: ndcg_at_3
value: 28.116000000000003
- type: ndcg_at_5
value: 29.932
- type: ndcg_at_10
value: 33.207
- type: ndcg_at_20
value: 35.730000000000004
- type: ndcg_at_100
value: 40.251999999999995
- type: ndcg_at_1000
value: 43.463
- type: map_at_1
value: 14.846
- type: map_at_3
value: 20.683
- type: map_at_5
value: 22.753999999999998
- type: map_at_10
value: 24.413
- type: map_at_20
value: 25.355
- type: map_at_100
value: 26.243
- type: map_at_1000
value: 26.43
- type: recall_at_1
value: 14.846
- type: recall_at_3
value: 25.368000000000002
- type: recall_at_5
value: 31.159
- type: recall_at_10
value: 38.391
- type: recall_at_20
value: 45.366
- type: recall_at_100
value: 62.597
- type: recall_at_1000
value: 80.448
- type: precision_at_1
value: 32.964
- type: precision_at_3
value: 20.782
- type: precision_at_5
value: 15.595999999999998
- type: precision_at_10
value: 9.98
- type: precision_at_20
value: 6.091
- type: precision_at_100
value: 1.7760000000000002
- type: precision_at_1000
value: 0.23700000000000002
- type: mrr_at_1
value: 32.9642
- type: mrr_at_3
value: 41.9001
- type: mrr_at_5
value: 43.4701
- type: mrr_at_10
value: 44.6392
- type: mrr_at_20
value: 45.129999999999995
- type: mrr_at_100
value: 45.4343
- type: mrr_at_1000
value: 45.4726
- type: nauc_ndcg_at_1_max
value: 31.2733
- type: nauc_ndcg_at_1_std
value: 17.8778
- type: nauc_ndcg_at_1_diff1
value: 30.7939
- type: nauc_ndcg_at_3_max
value: 35.7233
- type: nauc_ndcg_at_3_std
value: 20.499200000000002
- type: nauc_ndcg_at_3_diff1
value: 26.6175
- type: nauc_ndcg_at_5_max
value: 36.5593
- type: nauc_ndcg_at_5_std
value: 20.5487
- type: nauc_ndcg_at_5_diff1
value: 24.8006
- type: nauc_ndcg_at_10_max
value: 38.1663
- type: nauc_ndcg_at_10_std
value: 23.8688
- type: nauc_ndcg_at_10_diff1
value: 23.7262
- type: nauc_ndcg_at_20_max
value: 38.719
- type: nauc_ndcg_at_20_std
value: 26.4556
- type: nauc_ndcg_at_20_diff1
value: 22.7078
- type: nauc_ndcg_at_100_max
value: 40.396100000000004
- type: nauc_ndcg_at_100_std
value: 29.325200000000002
- type: nauc_ndcg_at_100_diff1
value: 22.7562
- type: nauc_ndcg_at_1000_max
value: 40.4082
- type: nauc_ndcg_at_1000_std
value: 29.595
- type: nauc_ndcg_at_1000_diff1
value: 22.8439
- type: nauc_map_at_1_max
value: 33.0891
- type: nauc_map_at_1_std
value: 13.3677
- type: nauc_map_at_1_diff1
value: 34.1515
- type: nauc_map_at_3_max
value: 35.384
- type: nauc_map_at_3_std
value: 17.637
- type: nauc_map_at_3_diff1
value: 28.4007
- type: nauc_map_at_5_max
value: 36.0659
- type: nauc_map_at_5_std
value: 18.5628
- type: nauc_map_at_5_diff1
value: 26.5464
- type: nauc_map_at_10_max
value: 37.2578
- type: nauc_map_at_10_std
value: 20.617
- type: nauc_map_at_10_diff1
value: 25.926199999999998
- type: nauc_map_at_20_max
value: 37.500299999999996
- type: nauc_map_at_20_std
value: 21.851300000000002
- type: nauc_map_at_20_diff1
value: 25.3292
- type: nauc_map_at_100_max
value: 37.933299999999996
- type: nauc_map_at_100_std
value: 22.6615
- type: nauc_map_at_100_diff1
value: 25.259500000000003
- type: nauc_map_at_1000_max
value: 37.9165
- type: nauc_map_at_1000_std
value: 22.7028
- type: nauc_map_at_1000_diff1
value: 25.239299999999997
- type: nauc_recall_at_1_max
value: 33.0891
- type: nauc_recall_at_1_std
value: 13.3677
- type: nauc_recall_at_1_diff1
value: 34.1515
- type: nauc_recall_at_3_max
value: 35.282000000000004
- type: nauc_recall_at_3_std
value: 18.8367
- type: nauc_recall_at_3_diff1
value: 24.2501
- type: nauc_recall_at_5_max
value: 34.3122
- type: nauc_recall_at_5_std
value: 18.5093
- type: nauc_recall_at_5_diff1
value: 18.8749
- type: nauc_recall_at_10_max
value: 36.2395
- type: nauc_recall_at_10_std
value: 24.2952
- type: nauc_recall_at_10_diff1
value: 16.3158
- type: nauc_recall_at_20_max
value: 35.6255
- type: nauc_recall_at_20_std
value: 29.56
- type: nauc_recall_at_20_diff1
value: 12.856699999999998
- type: nauc_recall_at_100_max
value: 39.016600000000004
- type: nauc_recall_at_100_std
value: 37.9984
- type: nauc_recall_at_100_diff1
value: 10.807
- type: nauc_recall_at_1000_max
value: 42.7582
- type: nauc_recall_at_1000_std
value: 46.9593
- type: nauc_recall_at_1000_diff1
value: 8.1464
- type: nauc_precision_at_1_max
value: 31.2733
- type: nauc_precision_at_1_std
value: 17.8778
- type: nauc_precision_at_1_diff1
value: 30.7939
- type: nauc_precision_at_3_max
value: 35.2819
- type: nauc_precision_at_3_std
value: 25.9018
- type: nauc_precision_at_3_diff1
value: 18.4633
- type: nauc_precision_at_5_max
value: 32.7525
- type: nauc_precision_at_5_std
value: 25.5596
- type: nauc_precision_at_5_diff1
value: 11.241
- type: nauc_precision_at_10_max
value: 32.4574
- type: nauc_precision_at_10_std
value: 31.1815
- type: nauc_precision_at_10_diff1
value: 6.3983
- type: nauc_precision_at_20_max
value: 29.522100000000002
- type: nauc_precision_at_20_std
value: 34.4644
- type: nauc_precision_at_20_diff1
value: 1.9328
- type: nauc_precision_at_100_max
value: 25.594299999999997
- type: nauc_precision_at_100_std
value: 36.7783
- type: nauc_precision_at_100_diff1
value: -1.9514
- type: nauc_precision_at_1000_max
value: 14.3931
- type: nauc_precision_at_1000_std
value: 28.8585
- type: nauc_precision_at_1000_diff1
value: -7.264600000000001
- type: nauc_mrr_at_1_max
value: 31.2733
- type: nauc_mrr_at_1_std
value: 17.8778
- type: nauc_mrr_at_1_diff1
value: 30.7939
- type: nauc_mrr_at_3_max
value: 34.4613
- type: nauc_mrr_at_3_std
value: 21.529
- type: nauc_mrr_at_3_diff1
value: 27.369
- type: nauc_mrr_at_5_max
value: 34.5965
- type: nauc_mrr_at_5_std
value: 21.7303
- type: nauc_mrr_at_5_diff1
value: 26.521800000000002
- type: nauc_mrr_at_10_max
value: 34.6792
- type: nauc_mrr_at_10_std
value: 22.4157
- type: nauc_mrr_at_10_diff1
value: 26.2542
- type: nauc_mrr_at_20_max
value: 34.746
- type: nauc_mrr_at_20_std
value: 22.586000000000002
- type: nauc_mrr_at_20_diff1
value: 26.305600000000002
- type: nauc_mrr_at_100_max
value: 34.7901
- type: nauc_mrr_at_100_std
value: 22.5625
- type: nauc_mrr_at_100_diff1
value: 26.429599999999997
- type: nauc_mrr_at_1000_max
value: 34.779700000000005
- type: nauc_mrr_at_1000_std
value: 22.5434
- type: nauc_mrr_at_1000_diff1
value: 26.437300000000004
- type: main_score
value: 33.207
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: ndcg_at_1
value: 51.87500000000001
- type: ndcg_at_3
value: 42.552
- type: ndcg_at_5
value: 39.946
- type: ndcg_at_10
value: 37.897999999999996
- type: ndcg_at_20
value: 37.153000000000006
- type: ndcg_at_100
value: 42.012
- type: ndcg_at_1000
value: 49.202
- type: map_at_1
value: 7.869
- type: map_at_3
value: 12.307
- type: map_at_5
value: 14.394000000000002
- type: map_at_10
value: 17.175
- type: map_at_20
value: 19.689
- type: map_at_100
value: 23.857999999999997
- type: map_at_1000
value: 25.417
- type: recall_at_1
value: 7.869
- type: recall_at_3
value: 13.566
- type: recall_at_5
value: 17.403
- type: recall_at_10
value: 22.811999999999998
- type: recall_at_20
value: 29.378999999999998
- type: recall_at_100
value: 48.353
- type: recall_at_1000
value: 70.801
- type: precision_at_1
value: 62.5
- type: precision_at_3
value: 45.417
- type: precision_at_5
value: 38.15
- type: precision_at_10
value: 29.95
- type: precision_at_20
value: 22.462
- type: precision_at_100
value: 9.703000000000001
- type: precision_at_1000
value: 2.027
- type: mrr_at_1
value: 62.5
- type: mrr_at_3
value: 68.625
- type: mrr_at_5
value: 70.0625
- type: mrr_at_10
value: 70.60549999999999
- type: mrr_at_20
value: 70.934
- type: mrr_at_100
value: 71.0742
- type: mrr_at_1000
value: 71.0797
- type: nauc_ndcg_at_1_max
value: 41.436499999999995
- type: nauc_ndcg_at_1_std
value: 26.6537
- type: nauc_ndcg_at_1_diff1
value: 41.362500000000004
- type: nauc_ndcg_at_3_max
value: 38.2075
- type: nauc_ndcg_at_3_std
value: 28.1899
- type: nauc_ndcg_at_3_diff1
value: 29.353299999999997
- type: nauc_ndcg_at_5_max
value: 36.592
- type: nauc_ndcg_at_5_std
value: 27.9763
- type: nauc_ndcg_at_5_diff1
value: 30.2168
- type: nauc_ndcg_at_10_max
value: 36.2032
- type: nauc_ndcg_at_10_std
value: 26.7501
- type: nauc_ndcg_at_10_diff1
value: 33.409499999999994
- type: nauc_ndcg_at_20_max
value: 33.981
- type: nauc_ndcg_at_20_std
value: 25.5934
- type: nauc_ndcg_at_20_diff1
value: 33.3985
- type: nauc_ndcg_at_100_max
value: 36.448
- type: nauc_ndcg_at_100_std
value: 32.3459
- type: nauc_ndcg_at_100_diff1
value: 33.2002
- type: nauc_ndcg_at_1000_max
value: 40.2408
- type: nauc_ndcg_at_1000_std
value: 38.6683
- type: nauc_ndcg_at_1000_diff1
value: 31.9563
- type: nauc_map_at_1_max
value: 8.8384
- type: nauc_map_at_1_std
value: -12.18
- type: nauc_map_at_1_diff1
value: 42.5949
- type: nauc_map_at_3_max
value: 10.4264
- type: nauc_map_at_3_std
value: -6.4437
- type: nauc_map_at_3_diff1
value: 31.555
- type: nauc_map_at_5_max
value: 12.4445
- type: nauc_map_at_5_std
value: -3.5782000000000003
- type: nauc_map_at_5_diff1
value: 29.8594
- type: nauc_map_at_10_max
value: 16.9699
- type: nauc_map_at_10_std
value: 2.0362999999999998
- type: nauc_map_at_10_diff1
value: 29.737599999999997
- type: nauc_map_at_20_max
value: 21.4809
- type: nauc_map_at_20_std
value: 9.0494
- type: nauc_map_at_20_diff1
value: 30.0806
- type: nauc_map_at_100_max
value: 29.0583
- type: nauc_map_at_100_std
value: 22.3292
- type: nauc_map_at_100_diff1
value: 29.9971
- type: nauc_map_at_1000_max
value: 30.4654
- type: nauc_map_at_1000_std
value: 25.208799999999997
- type: nauc_map_at_1000_diff1
value: 29.3623
- type: nauc_recall_at_1_max
value: 8.8384
- type: nauc_recall_at_1_std
value: -12.18
- type: nauc_recall_at_1_diff1
value: 42.5949
- type: nauc_recall_at_3_max
value: 7.692400000000001
- type: nauc_recall_at_3_std
value: -7.5964
- type: nauc_recall_at_3_diff1
value: 27.5878
- type: nauc_recall_at_5_max
value: 7.3506
- type: nauc_recall_at_5_std
value: -7.152799999999999
- type: nauc_recall_at_5_diff1
value: 25.565199999999997
- type: nauc_recall_at_10_max
value: 13.009
- type: nauc_recall_at_10_std
value: -0.6829
- type: nauc_recall_at_10_diff1
value: 25.8442
- type: nauc_recall_at_20_max
value: 15.329
- type: nauc_recall_at_20_std
value: 5.9502
- type: nauc_recall_at_20_diff1
value: 24.584400000000002
- type: nauc_recall_at_100_max
value: 26.1527
- type: nauc_recall_at_100_std
value: 28.8597
- type: nauc_recall_at_100_diff1
value: 23.5886
- type: nauc_recall_at_1000_max
value: 32.736
- type: nauc_recall_at_1000_std
value: 41.5612
- type: nauc_recall_at_1000_diff1
value: 21.8267
- type: nauc_precision_at_1_max
value: 56.4401
- type: nauc_precision_at_1_std
value: 39.5242
- type: nauc_precision_at_1_diff1
value: 44.307
- type: nauc_precision_at_3_max
value: 44.521100000000004
- type: nauc_precision_at_3_std
value: 42.4366
- type: nauc_precision_at_3_diff1
value: 13.569899999999999
- type: nauc_precision_at_5_max
value: 42.3594
- type: nauc_precision_at_5_std
value: 44.4758
- type: nauc_precision_at_5_diff1
value: 10.2733
- type: nauc_precision_at_10_max
value: 41.260000000000005
- type: nauc_precision_at_10_std
value: 47.2496
- type: nauc_precision_at_10_diff1
value: 9.393799999999999
- type: nauc_precision_at_20_max
value: 39.8169
- type: nauc_precision_at_20_std
value: 49.8068
- type: nauc_precision_at_20_diff1
value: 8.7204
- type: nauc_precision_at_100_max
value: 30.9015
- type: nauc_precision_at_100_std
value: 46.853899999999996
- type: nauc_precision_at_100_diff1
value: 2.0425
- type: nauc_precision_at_1000_max
value: 5.3395
- type: nauc_precision_at_1000_std
value: 17.8995
- type: nauc_precision_at_1000_diff1
value: -13.3583
- type: nauc_mrr_at_1_max
value: 56.4401
- type: nauc_mrr_at_1_std
value: 39.5242
- type: nauc_mrr_at_1_diff1
value: 44.307
- type: nauc_mrr_at_3_max
value: 56.97990000000001
- type: nauc_mrr_at_3_std
value: 42.138
- type: nauc_mrr_at_3_diff1
value: 41.5078
- type: nauc_mrr_at_5_max
value: 56.234399999999994
- type: nauc_mrr_at_5_std
value: 41.3617
- type: nauc_mrr_at_5_diff1
value: 41.227599999999995
- type: nauc_mrr_at_10_max
value: 56.6701
- type: nauc_mrr_at_10_std
value: 41.6424
- type: nauc_mrr_at_10_diff1
value: 41.814800000000005
- type: nauc_mrr_at_20_max
value: 56.6094
- type: nauc_mrr_at_20_std
value: 41.7269
- type: nauc_mrr_at_20_diff1
value: 41.8099
- type: nauc_mrr_at_100_max
value: 56.623900000000006
- type: nauc_mrr_at_100_std
value: 41.6436
- type: nauc_mrr_at_100_diff1
value: 41.7734
- type: nauc_mrr_at_1000_max
value: 56.6269
- type: nauc_mrr_at_1000_std
value: 41.6455
- type: nauc_mrr_at_1000_diff1
value: 41.7701
- type: main_score
value: 37.897999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 63.235
- type: f1
value: 59.071799999999996
- type: f1_weighted
value: 64.6776
- type: main_score
value: 63.235
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: ndcg_at_1
value: 83.498
- type: ndcg_at_3
value: 86.69200000000001
- type: ndcg_at_5
value: 87.787
- type: ndcg_at_10
value: 88.31
- type: ndcg_at_20
value: 88.595
- type: ndcg_at_100
value: 88.905
- type: ndcg_at_1000
value: 89.09700000000001
- type: map_at_1
value: 77.41
- type: map_at_3
value: 83.673
- type: map_at_5
value: 84.464
- type: map_at_10
value: 84.748
- type: map_at_20
value: 84.863
- type: map_at_100
value: 84.929
- type: map_at_1000
value: 84.941
- type: recall_at_1
value: 77.41
- type: recall_at_3
value: 90.027
- type: recall_at_5
value: 92.804
- type: recall_at_10
value: 94.377
- type: recall_at_20
value: 95.321
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 97.77900000000001
- type: precision_at_1
value: 83.498
- type: precision_at_3
value: 32.728
- type: precision_at_5
value: 20.375
- type: precision_at_10
value: 10.424999999999999
- type: precision_at_20
value: 5.305
- type: precision_at_100
value: 1.0919999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 83.4983
- type: mrr_at_3
value: 89.1189
- type: mrr_at_5
value: 89.6395
- type: mrr_at_10
value: 89.79899999999999
- type: mrr_at_20
value: 89.8266
- type: mrr_at_100
value: 89.8373
- type: mrr_at_1000
value: 89.8376
- type: nauc_ndcg_at_1_max
value: 31.5238
- type: nauc_ndcg_at_1_std
value: -2.2584
- type: nauc_ndcg_at_1_diff1
value: 74.5023
- type: nauc_ndcg_at_3_max
value: 24.1127
- type: nauc_ndcg_at_3_std
value: -2.6446
- type: nauc_ndcg_at_3_diff1
value: 49.2508
- type: nauc_ndcg_at_5_max
value: 23.6616
- type: nauc_ndcg_at_5_std
value: -1.3849
- type: nauc_ndcg_at_5_diff1
value: 47.106300000000005
- type: nauc_ndcg_at_10_max
value: 24.0605
- type: nauc_ndcg_at_10_std
value: -0.4336
- type: nauc_ndcg_at_10_diff1
value: 46.9328
- type: nauc_ndcg_at_20_max
value: 24.7393
- type: nauc_ndcg_at_20_std
value: 0.2855
- type: nauc_ndcg_at_20_diff1
value: 47.6414
- type: nauc_ndcg_at_100_max
value: 25.228099999999998
- type: nauc_ndcg_at_100_std
value: 0.5433
- type: nauc_ndcg_at_100_diff1
value: 48.7128
- type: nauc_ndcg_at_1000_max
value: 25.7762
- type: nauc_ndcg_at_1000_std
value: 0.7018
- type: nauc_ndcg_at_1000_diff1
value: 49.6639
- type: nauc_map_at_1_max
value: 22.7408
- type: nauc_map_at_1_std
value: -1.3189
- type: nauc_map_at_1_diff1
value: 54.049400000000006
- type: nauc_map_at_3_max
value: 22.6962
- type: nauc_map_at_3_std
value: -1.9411
- type: nauc_map_at_3_diff1
value: 47.3787
- type: nauc_map_at_5_max
value: 22.8472
- type: nauc_map_at_5_std
value: -1.2210999999999999
- type: nauc_map_at_5_diff1
value: 46.8099
- type: nauc_map_at_10_max
value: 23.1253
- type: nauc_map_at_10_std
value: -0.8166
- type: nauc_map_at_10_diff1
value: 46.961000000000006
- type: nauc_map_at_20_max
value: 23.336299999999998
- type: nauc_map_at_20_std
value: -0.6204000000000001
- type: nauc_map_at_20_diff1
value: 47.2216
- type: nauc_map_at_100_max
value: 23.4294
- type: nauc_map_at_100_std
value: -0.5717
- type: nauc_map_at_100_diff1
value: 47.3991
- type: nauc_map_at_1000_max
value: 23.4583
- type: nauc_map_at_1000_std
value: -0.5559999999999999
- type: nauc_map_at_1000_diff1
value: 47.4426
- type: nauc_recall_at_1_max
value: 22.7408
- type: nauc_recall_at_1_std
value: -1.3189
- type: nauc_recall_at_1_diff1
value: 54.049400000000006
- type: nauc_recall_at_3_max
value: 17.4806
- type: nauc_recall_at_3_std
value: -3.1338
- type: nauc_recall_at_3_diff1
value: 26.4903
- type: nauc_recall_at_5_max
value: 13.660400000000001
- type: nauc_recall_at_5_std
value: 1.3013000000000001
- type: nauc_recall_at_5_diff1
value: 12.3123
- type: nauc_recall_at_10_max
value: 13.4502
- type: nauc_recall_at_10_std
value: 7.7186
- type: nauc_recall_at_10_diff1
value: 2.9850000000000003
- type: nauc_recall_at_20_max
value: 16.927400000000002
- type: nauc_recall_at_20_std
value: 15.0728
- type: nauc_recall_at_20_diff1
value: 0.3826
- type: nauc_recall_at_100_max
value: 19.942899999999998
- type: nauc_recall_at_100_std
value: 23.5429
- type: nauc_recall_at_100_diff1
value: -3.4923
- type: nauc_recall_at_1000_max
value: 31.8901
- type: nauc_recall_at_1000_std
value: 37.6917
- type: nauc_recall_at_1000_diff1
value: -3.8215
- type: nauc_precision_at_1_max
value: 31.5238
- type: nauc_precision_at_1_std
value: -2.2584
- type: nauc_precision_at_1_diff1
value: 74.5023
- type: nauc_precision_at_3_max
value: 21.2432
- type: nauc_precision_at_3_std
value: -4.3431
- type: nauc_precision_at_3_diff1
value: 27.9237
- type: nauc_precision_at_5_max
value: 12.6046
- type: nauc_precision_at_5_std
value: 1.9817
- type: nauc_precision_at_5_diff1
value: 4.920100000000001
- type: nauc_precision_at_10_max
value: 11.452900000000001
- type: nauc_precision_at_10_std
value: 7.691199999999999
- type: nauc_precision_at_10_diff1
value: -2.363
- type: nauc_precision_at_20_max
value: 10.7846
- type: nauc_precision_at_20_std
value: 9.517100000000001
- type: nauc_precision_at_20_diff1
value: -3.3125
- type: nauc_precision_at_100_max
value: 9.1886
- type: nauc_precision_at_100_std
value: 9.5228
- type: nauc_precision_at_100_diff1
value: -1.9271
- type: nauc_precision_at_1000_max
value: 8.9731
- type: nauc_precision_at_1000_std
value: 8.952200000000001
- type: nauc_precision_at_1000_diff1
value: 1.226
- type: nauc_mrr_at_1_max
value: 31.5238
- type: nauc_mrr_at_1_std
value: -2.2584
- type: nauc_mrr_at_1_diff1
value: 74.5023
- type: nauc_mrr_at_3_max
value: 32.1889
- type: nauc_mrr_at_3_std
value: -4.9427
- type: nauc_mrr_at_3_diff1
value: 72.74080000000001
- type: nauc_mrr_at_5_max
value: 32.0768
- type: nauc_mrr_at_5_std
value: -4.4333
- type: nauc_mrr_at_5_diff1
value: 72.8939
- type: nauc_mrr_at_10_max
value: 32.1312
- type: nauc_mrr_at_10_std
value: -4.1756
- type: nauc_mrr_at_10_diff1
value: 73.0284
- type: nauc_mrr_at_20_max
value: 32.163199999999996
- type: nauc_mrr_at_20_std
value: -4.0634999999999994
- type: nauc_mrr_at_20_diff1
value: 73.0685
- type: nauc_mrr_at_100_max
value: 32.118
- type: nauc_mrr_at_100_std
value: -4.0852
- type: nauc_mrr_at_100_diff1
value: 73.0722
- type: nauc_mrr_at_1000_max
value: 32.1164
- type: nauc_mrr_at_1000_std
value: -4.0867
- type: nauc_mrr_at_1000_diff1
value: 73.0722
- type: main_score
value: 88.31
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: ndcg_at_1
value: 48.302
- type: ndcg_at_3
value: 44.882
- type: ndcg_at_5
value: 45.898
- type: ndcg_at_10
value: 48.28
- type: ndcg_at_20
value: 51.536
- type: ndcg_at_100
value: 55.461000000000006
- type: ndcg_at_1000
value: 57.938
- type: map_at_1
value: 24.324
- type: map_at_3
value: 35.225
- type: map_at_5
value: 37.962
- type: map_at_10
value: 40.054
- type: map_at_20
value: 41.399
- type: map_at_100
value: 42.321
- type: map_at_1000
value: 42.476
- type: recall_at_1
value: 24.324
- type: recall_at_3
value: 41.036
- type: recall_at_5
value: 46.844
- type: recall_at_10
value: 54.75
- type: recall_at_20
value: 64.86800000000001
- type: recall_at_100
value: 80.413
- type: recall_at_1000
value: 95.242
- type: precision_at_1
value: 48.302
- type: precision_at_3
value: 29.835
- type: precision_at_5
value: 21.852
- type: precision_at_10
value: 13.333
- type: precision_at_20
value: 8.017000000000001
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.25
- type: mrr_at_1
value: 48.302499999999995
- type: mrr_at_3
value: 55.0669
- type: mrr_at_5
value: 56.208800000000004
- type: mrr_at_10
value: 57.128299999999996
- type: mrr_at_20
value: 57.6631
- type: mrr_at_100
value: 57.897
- type: mrr_at_1000
value: 57.9236
- type: nauc_ndcg_at_1_max
value: 35.3012
- type: nauc_ndcg_at_1_std
value: -10.4163
- type: nauc_ndcg_at_1_diff1
value: 49.8902
- type: nauc_ndcg_at_3_max
value: 33.3967
- type: nauc_ndcg_at_3_std
value: -6.623900000000001
- type: nauc_ndcg_at_3_diff1
value: 39.811600000000006
- type: nauc_ndcg_at_5_max
value: 32.1592
- type: nauc_ndcg_at_5_std
value: -7.155799999999999
- type: nauc_ndcg_at_5_diff1
value: 39.4895
- type: nauc_ndcg_at_10_max
value: 32.6943
- type: nauc_ndcg_at_10_std
value: -5.543
- type: nauc_ndcg_at_10_diff1
value: 39.4015
- type: nauc_ndcg_at_20_max
value: 33.247
- type: nauc_ndcg_at_20_std
value: -3.5911
- type: nauc_ndcg_at_20_diff1
value: 40.1093
- type: nauc_ndcg_at_100_max
value: 35.8738
- type: nauc_ndcg_at_100_std
value: -0.0625
- type: nauc_ndcg_at_100_diff1
value: 40.1993
- type: nauc_ndcg_at_1000_max
value: 36.105
- type: nauc_ndcg_at_1000_std
value: -1.2023000000000001
- type: nauc_ndcg_at_1000_diff1
value: 40.9404
- type: nauc_map_at_1_max
value: 15.893099999999999
- type: nauc_map_at_1_std
value: -10.817400000000001
- type: nauc_map_at_1_diff1
value: 42.2743
- type: nauc_map_at_3_max
value: 24.8811
- type: nauc_map_at_3_std
value: -8.8756
- type: nauc_map_at_3_diff1
value: 40.2234
- type: nauc_map_at_5_max
value: 28.198
- type: nauc_map_at_5_std
value: -8.2681
- type: nauc_map_at_5_diff1
value: 39.8233
- type: nauc_map_at_10_max
value: 29.8969
- type: nauc_map_at_10_std
value: -7.2732
- type: nauc_map_at_10_diff1
value: 39.056200000000004
- type: nauc_map_at_20_max
value: 30.438900000000004
- type: nauc_map_at_20_std
value: -6.2997
- type: nauc_map_at_20_diff1
value: 39.2282
- type: nauc_map_at_100_max
value: 31.2085
- type: nauc_map_at_100_std
value: -5.4389
- type: nauc_map_at_100_diff1
value: 39.2156
- type: nauc_map_at_1000_max
value: 31.2581
- type: nauc_map_at_1000_std
value: -5.4575
- type: nauc_map_at_1000_diff1
value: 39.256099999999996
- type: nauc_recall_at_1_max
value: 15.893099999999999
- type: nauc_recall_at_1_std
value: -10.817400000000001
- type: nauc_recall_at_1_diff1
value: 42.2743
- type: nauc_recall_at_3_max
value: 20.7605
- type: nauc_recall_at_3_std
value: -7.9595
- type: nauc_recall_at_3_diff1
value: 33.0679
- type: nauc_recall_at_5_max
value: 24.532899999999998
- type: nauc_recall_at_5_std
value: -7.535
- type: nauc_recall_at_5_diff1
value: 32.5104
- type: nauc_recall_at_10_max
value: 26.8851
- type: nauc_recall_at_10_std
value: -2.7628
- type: nauc_recall_at_10_diff1
value: 28.9325
- type: nauc_recall_at_20_max
value: 25.8328
- type: nauc_recall_at_20_std
value: 3.2887
- type: nauc_recall_at_20_diff1
value: 28.417399999999997
- type: nauc_recall_at_100_max
value: 36.079699999999995
- type: nauc_recall_at_100_std
value: 27.093099999999996
- type: nauc_recall_at_100_diff1
value: 26.377299999999998
- type: nauc_recall_at_1000_max
value: 47.7952
- type: nauc_recall_at_1000_std
value: 53.0751
- type: nauc_recall_at_1000_diff1
value: 32.7248
- type: nauc_precision_at_1_max
value: 35.3012
- type: nauc_precision_at_1_std
value: -10.4163
- type: nauc_precision_at_1_diff1
value: 49.8902
- type: nauc_precision_at_3_max
value: 39.9322
- type: nauc_precision_at_3_std
value: 0.2644
- type: nauc_precision_at_3_diff1
value: 26.600600000000004
- type: nauc_precision_at_5_max
value: 40.3902
- type: nauc_precision_at_5_std
value: 2.3505000000000003
- type: nauc_precision_at_5_diff1
value: 19.7771
- type: nauc_precision_at_10_max
value: 39.415299999999995
- type: nauc_precision_at_10_std
value: 6.5885
- type: nauc_precision_at_10_diff1
value: 13.7527
- type: nauc_precision_at_20_max
value: 37.2422
- type: nauc_precision_at_20_std
value: 12.9599
- type: nauc_precision_at_20_diff1
value: 9.6751
- type: nauc_precision_at_100_max
value: 35.6967
- type: nauc_precision_at_100_std
value: 19.8202
- type: nauc_precision_at_100_diff1
value: 1.6320999999999999
- type: nauc_precision_at_1000_max
value: 28.9716
- type: nauc_precision_at_1000_std
value: 15.8223
- type: nauc_precision_at_1000_diff1
value: -3.3576
- type: nauc_mrr_at_1_max
value: 35.3012
- type: nauc_mrr_at_1_std
value: -10.4163
- type: nauc_mrr_at_1_diff1
value: 49.8902
- type: nauc_mrr_at_3_max
value: 36.6979
- type: nauc_mrr_at_3_std
value: -7.6057
- type: nauc_mrr_at_3_diff1
value: 48.1421
- type: nauc_mrr_at_5_max
value: 37.0712
- type: nauc_mrr_at_5_std
value: -7.4076
- type: nauc_mrr_at_5_diff1
value: 47.7326
- type: nauc_mrr_at_10_max
value: 37.4375
- type: nauc_mrr_at_10_std
value: -6.875299999999999
- type: nauc_mrr_at_10_diff1
value: 47.7446
- type: nauc_mrr_at_20_max
value: 37.473
- type: nauc_mrr_at_20_std
value: -6.694799999999999
- type: nauc_mrr_at_20_diff1
value: 47.8238
- type: nauc_mrr_at_100_max
value: 37.453599999999994
- type: nauc_mrr_at_100_std
value: -6.612500000000001
- type: nauc_mrr_at_100_diff1
value: 47.8186
- type: nauc_mrr_at_1000_max
value: 37.4367
- type: nauc_mrr_at_1000_std
value: -6.6572000000000005
- type: nauc_mrr_at_1000_diff1
value: 47.8333
- type: main_score
value: 48.28
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: ndcg_at_1
value: 82.836
- type: ndcg_at_3
value: 60.80799999999999
- type: ndcg_at_5
value: 62.719
- type: ndcg_at_10
value: 64.464
- type: ndcg_at_20
value: 65.613
- type: ndcg_at_100
value: 67.244
- type: ndcg_at_1000
value: 68.633
- type: map_at_1
value: 41.418
- type: map_at_3
value: 51.913
- type: map_at_5
value: 53.45100000000001
- type: map_at_10
value: 54.50899999999999
- type: map_at_20
value: 54.981
- type: map_at_100
value: 55.315000000000005
- type: map_at_1000
value: 55.387
- type: recall_at_1
value: 41.418
- type: recall_at_3
value: 55.206
- type: recall_at_5
value: 58.987
- type: recall_at_10
value: 63.369
- type: recall_at_20
value: 67.07
- type: recall_at_100
value: 74.29400000000001
- type: recall_at_1000
value: 83.504
- type: precision_at_1
value: 82.836
- type: precision_at_3
value: 36.803999999999995
- type: precision_at_5
value: 23.595
- type: precision_at_10
value: 12.674
- type: precision_at_20
value: 6.707000000000001
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.167
- type: mrr_at_1
value: 82.8359
- type: mrr_at_3
value: 86.7207
- type: mrr_at_5
value: 87.1062
- type: mrr_at_10
value: 87.3533
- type: mrr_at_20
value: 87.4411
- type: mrr_at_100
value: 87.4944
- type: mrr_at_1000
value: 87.5012
- type: nauc_ndcg_at_1_max
value: 55.378400000000006
- type: nauc_ndcg_at_1_std
value: -8.999799999999999
- type: nauc_ndcg_at_1_diff1
value: 81.65289999999999
- type: nauc_ndcg_at_3_max
value: 27.530900000000003
- type: nauc_ndcg_at_3_std
value: -1.4845000000000002
- type: nauc_ndcg_at_3_diff1
value: 28.8078
- type: nauc_ndcg_at_5_max
value: 24.8019
- type: nauc_ndcg_at_5_std
value: -0.6705
- type: nauc_ndcg_at_5_diff1
value: 25.1054
- type: nauc_ndcg_at_10_max
value: 22.6678
- type: nauc_ndcg_at_10_std
value: 0.8309000000000001
- type: nauc_ndcg_at_10_diff1
value: 22.1137
- type: nauc_ndcg_at_20_max
value: 21.601200000000002
- type: nauc_ndcg_at_20_std
value: 1.6587
- type: nauc_ndcg_at_20_diff1
value: 20.9774
- type: nauc_ndcg_at_100_max
value: 20.258499999999998
- type: nauc_ndcg_at_100_std
value: 2.4681
- type: nauc_ndcg_at_100_diff1
value: 19.4499
- type: nauc_ndcg_at_1000_max
value: 20.4564
- type: nauc_ndcg_at_1000_std
value: 2.8757
- type: nauc_ndcg_at_1000_diff1
value: 19.674500000000002
- type: nauc_map_at_1_max
value: 55.378400000000006
- type: nauc_map_at_1_std
value: -8.999799999999999
- type: nauc_map_at_1_diff1
value: 81.65289999999999
- type: nauc_map_at_3_max
value: 22.8016
- type: nauc_map_at_3_std
value: -1.3432
- type: nauc_map_at_3_diff1
value: 21.9107
- type: nauc_map_at_5_max
value: 21.0041
- type: nauc_map_at_5_std
value: -0.8455
- type: nauc_map_at_5_diff1
value: 19.5463
- type: nauc_map_at_10_max
value: 19.9533
- type: nauc_map_at_10_std
value: -0.058
- type: nauc_map_at_10_diff1
value: 18.075
- type: nauc_map_at_20_max
value: 19.5951
- type: nauc_map_at_20_std
value: 0.2562
- type: nauc_map_at_20_diff1
value: 17.71
- type: nauc_map_at_100_max
value: 19.3598
- type: nauc_map_at_100_std
value: 0.42960000000000004
- type: nauc_map_at_100_diff1
value: 17.461299999999998
- type: nauc_map_at_1000_max
value: 19.359
- type: nauc_map_at_1000_std
value: 0.451
- type: nauc_map_at_1000_diff1
value: 17.4648
- type: nauc_recall_at_1_max
value: 55.378400000000006
- type: nauc_recall_at_1_std
value: -8.999799999999999
- type: nauc_recall_at_1_diff1
value: 81.65289999999999
- type: nauc_recall_at_3_max
value: 18.226
- type: nauc_recall_at_3_std
value: 0.7939999999999999
- type: nauc_recall_at_3_diff1
value: 12.2289
- type: nauc_recall_at_5_max
value: 12.998999999999999
- type: nauc_recall_at_5_std
value: 2.1354
- type: nauc_recall_at_5_diff1
value: 5.6548
- type: nauc_recall_at_10_max
value: 7.985200000000001
- type: nauc_recall_at_10_std
value: 5.3194
- type: nauc_recall_at_10_diff1
value: -0.9107000000000001
- type: nauc_recall_at_20_max
value: 4.3701
- type: nauc_recall_at_20_std
value: 7.6056
- type: nauc_recall_at_20_diff1
value: -4.7479000000000005
- type: nauc_recall_at_100_max
value: -2.7925
- type: nauc_recall_at_100_std
value: 11.228200000000001
- type: nauc_recall_at_100_diff1
value: -13.4144
- type: nauc_recall_at_1000_max
value: -7.6068
- type: nauc_recall_at_1000_std
value: 17.0487
- type: nauc_recall_at_1000_diff1
value: -21.2775
- type: nauc_precision_at_1_max
value: 55.378400000000006
- type: nauc_precision_at_1_std
value: -8.999799999999999
- type: nauc_precision_at_1_diff1
value: 81.65289999999999
- type: nauc_precision_at_3_max
value: 18.226
- type: nauc_precision_at_3_std
value: 0.7939999999999999
- type: nauc_precision_at_3_diff1
value: 12.2289
- type: nauc_precision_at_5_max
value: 12.998999999999999
- type: nauc_precision_at_5_std
value: 2.1354
- type: nauc_precision_at_5_diff1
value: 5.6548
- type: nauc_precision_at_10_max
value: 7.985200000000001
- type: nauc_precision_at_10_std
value: 5.3194
- type: nauc_precision_at_10_diff1
value: -0.9107000000000001
- type: nauc_precision_at_20_max
value: 4.3701
- type: nauc_precision_at_20_std
value: 7.6056
- type: nauc_precision_at_20_diff1
value: -4.7479000000000005
- type: nauc_precision_at_100_max
value: -2.7925
- type: nauc_precision_at_100_std
value: 11.228200000000001
- type: nauc_precision_at_100_diff1
value: -13.4144
- type: nauc_precision_at_1000_max
value: -7.6068
- type: nauc_precision_at_1000_std
value: 17.0487
- type: nauc_precision_at_1000_diff1
value: -21.2775
- type: nauc_mrr_at_1_max
value: 55.378400000000006
- type: nauc_mrr_at_1_std
value: -8.999799999999999
- type: nauc_mrr_at_1_diff1
value: 81.65289999999999
- type: nauc_mrr_at_3_max
value: 58.457
- type: nauc_mrr_at_3_std
value: -6.3487
- type: nauc_mrr_at_3_diff1
value: 80.559
- type: nauc_mrr_at_5_max
value: 58.4461
- type: nauc_mrr_at_5_std
value: -5.9587
- type: nauc_mrr_at_5_diff1
value: 80.6051
- type: nauc_mrr_at_10_max
value: 58.42659999999999
- type: nauc_mrr_at_10_std
value: -5.6473
- type: nauc_mrr_at_10_diff1
value: 80.6628
- type: nauc_mrr_at_20_max
value: 58.3928
- type: nauc_mrr_at_20_std
value: -5.6386
- type: nauc_mrr_at_20_diff1
value: 80.7154
- type: nauc_mrr_at_100_max
value: 58.341699999999996
- type: nauc_mrr_at_100_std
value: -5.6933
- type: nauc_mrr_at_100_diff1
value: 80.7071
- type: nauc_mrr_at_1000_max
value: 58.3298
- type: nauc_mrr_at_1000_std
value: -5.7103
- type: nauc_mrr_at_1000_diff1
value: 80.7062
- type: main_score
value: 64.464
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification (default)
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 94.9352
- type: f1
value: 94.9327
- type: f1_weighted
value: 94.9327
- type: ap
value: 92.00789999999999
- type: ap_weighted
value: 92.00789999999999
- type: main_score
value: 94.9352
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: ndcg_at_1
value: 21.504
- type: ndcg_at_3
value: 32.328
- type: ndcg_at_5
value: 36.452
- type: ndcg_at_10
value: 40.325
- type: ndcg_at_20
value: 43.07
- type: ndcg_at_100
value: 46.23
- type: ndcg_at_1000
value: 47.369
- type: map_at_1
value: 20.909
- type: map_at_3
value: 29.353
- type: map_at_5
value: 31.661
- type: map_at_10
value: 33.28
- type: map_at_20
value: 34.06
- type: map_at_100
value: 34.52
- type: map_at_1000
value: 34.567
- type: recall_at_1
value: 20.909
- type: recall_at_3
value: 40.339000000000006
- type: recall_at_5
value: 50.259
- type: recall_at_10
value: 62.059
- type: recall_at_20
value: 72.693
- type: recall_at_100
value: 89.269
- type: recall_at_1000
value: 97.933
- type: precision_at_1
value: 21.504
- type: precision_at_3
value: 13.944999999999999
- type: precision_at_5
value: 10.461
- type: precision_at_10
value: 6.491
- type: precision_at_20
value: 3.818
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.104
- type: mrr_at_1
value: 21.5043
- type: mrr_at_3
value: 29.978500000000004
- type: mrr_at_5
value: 32.251400000000004
- type: mrr_at_10
value: 33.8389
- type: mrr_at_20
value: 34.5788
- type: mrr_at_100
value: 35.010200000000005
- type: mrr_at_1000
value: 35.051100000000005
- type: nauc_ndcg_at_1_max
value: -1.0808
- type: nauc_ndcg_at_1_std
value: -22.361900000000002
- type: nauc_ndcg_at_1_diff1
value: 36.9204
- type: nauc_ndcg_at_3_max
value: -2.0822
- type: nauc_ndcg_at_3_std
value: -25.852999999999998
- type: nauc_ndcg_at_3_diff1
value: 30.8521
- type: nauc_ndcg_at_5_max
value: -2.0332
- type: nauc_ndcg_at_5_std
value: -26.375
- type: nauc_ndcg_at_5_diff1
value: 30.1887
- type: nauc_ndcg_at_10_max
value: -2.2974
- type: nauc_ndcg_at_10_std
value: -26.712000000000003
- type: nauc_ndcg_at_10_diff1
value: 30.1484
- type: nauc_ndcg_at_20_max
value: -1.825
- type: nauc_ndcg_at_20_std
value: -25.4078
- type: nauc_ndcg_at_20_diff1
value: 30.1416
- type: nauc_ndcg_at_100_max
value: -1.2328000000000001
- type: nauc_ndcg_at_100_std
value: -23.2039
- type: nauc_ndcg_at_100_diff1
value: 30.348399999999998
- type: nauc_ndcg_at_1000_max
value: -1.2148
- type: nauc_ndcg_at_1000_std
value: -23.8282
- type: nauc_ndcg_at_1000_diff1
value: 30.704900000000002
- type: nauc_map_at_1_max
value: -1.3643
- type: nauc_map_at_1_std
value: -22.5875
- type: nauc_map_at_1_diff1
value: 36.7618
- type: nauc_map_at_3_max
value: -2.0389999999999997
- type: nauc_map_at_3_std
value: -25.2612
- type: nauc_map_at_3_diff1
value: 32.171499999999995
- type: nauc_map_at_5_max
value: -2.0125
- type: nauc_map_at_5_std
value: -25.605800000000002
- type: nauc_map_at_5_diff1
value: 31.8081
- type: nauc_map_at_10_max
value: -2.1288
- type: nauc_map_at_10_std
value: -25.7592
- type: nauc_map_at_10_diff1
value: 31.8241
- type: nauc_map_at_20_max
value: -2.0061
- type: nauc_map_at_20_std
value: -25.4037
- type: nauc_map_at_20_diff1
value: 31.836799999999997
- type: nauc_map_at_100_max
value: -1.9212
- type: nauc_map_at_100_std
value: -25.0965
- type: nauc_map_at_100_diff1
value: 31.8741
- type: nauc_map_at_1000_max
value: -1.9189
- type: nauc_map_at_1000_std
value: -25.111800000000002
- type: nauc_map_at_1000_diff1
value: 31.8865
- type: nauc_recall_at_1_max
value: -1.3643
- type: nauc_recall_at_1_std
value: -22.5875
- type: nauc_recall_at_1_diff1
value: 36.7618
- type: nauc_recall_at_3_max
value: -2.4667000000000003
- type: nauc_recall_at_3_std
value: -27.6077
- type: nauc_recall_at_3_diff1
value: 27.2784
- type: nauc_recall_at_5_max
value: -2.3782
- type: nauc_recall_at_5_std
value: -28.6853
- type: nauc_recall_at_5_diff1
value: 25.5971
- type: nauc_recall_at_10_max
value: -3.2792000000000003
- type: nauc_recall_at_10_std
value: -29.9584
- type: nauc_recall_at_10_diff1
value: 24.7197
- type: nauc_recall_at_20_max
value: -1.2229999999999999
- type: nauc_recall_at_20_std
value: -24.479799999999997
- type: nauc_recall_at_20_diff1
value: 23.377100000000002
- type: nauc_recall_at_100_max
value: 6.815
- type: nauc_recall_at_100_std
value: 5.1981
- type: nauc_recall_at_100_diff1
value: 18.5723
- type: nauc_recall_at_1000_max
value: 38.1041
- type: nauc_recall_at_1000_std
value: 54.1207
- type: nauc_recall_at_1000_diff1
value: 6.8622000000000005
- type: nauc_precision_at_1_max
value: -1.0808
- type: nauc_precision_at_1_std
value: -22.361900000000002
- type: nauc_precision_at_1_diff1
value: 36.9204
- type: nauc_precision_at_3_max
value: -2.2124
- type: nauc_precision_at_3_std
value: -27.3546
- type: nauc_precision_at_3_diff1
value: 27.108700000000002
- type: nauc_precision_at_5_max
value: -1.8263000000000003
- type: nauc_precision_at_5_std
value: -27.977899999999998
- type: nauc_precision_at_5_diff1
value: 24.8638
- type: nauc_precision_at_10_max
value: -2.2207
- type: nauc_precision_at_10_std
value: -27.9458
- type: nauc_precision_at_10_diff1
value: 22.851
- type: nauc_precision_at_20_max
value: 0.5773999999999999
- type: nauc_precision_at_20_std
value: -20.118
- type: nauc_precision_at_20_diff1
value: 19.5377
- type: nauc_precision_at_100_max
value: 9.327399999999999
- type: nauc_precision_at_100_std
value: 8.4253
- type: nauc_precision_at_100_diff1
value: 8.33
- type: nauc_precision_at_1000_max
value: 15.6001
- type: nauc_precision_at_1000_std
value: 18.066
- type: nauc_precision_at_1000_diff1
value: -4.5068
- type: nauc_mrr_at_1_max
value: -1.0808
- type: nauc_mrr_at_1_std
value: -22.361900000000002
- type: nauc_mrr_at_1_diff1
value: 36.9204
- type: nauc_mrr_at_3_max
value: -1.6818
- type: nauc_mrr_at_3_std
value: -24.8193
- type: nauc_mrr_at_3_diff1
value: 32.159
- type: nauc_mrr_at_5_max
value: -1.6575
- type: nauc_mrr_at_5_std
value: -25.0817
- type: nauc_mrr_at_5_diff1
value: 31.800800000000002
- type: nauc_mrr_at_10_max
value: -1.7668
- type: nauc_mrr_at_10_std
value: -25.196800000000003
- type: nauc_mrr_at_10_diff1
value: 31.8144
- type: nauc_mrr_at_20_max
value: -1.6674000000000002
- type: nauc_mrr_at_20_std
value: -24.8741
- type: nauc_mrr_at_20_diff1
value: 31.8324
- type: nauc_mrr_at_100_max
value: -1.6053000000000002
- type: nauc_mrr_at_100_std
value: -24.6091
- type: nauc_mrr_at_100_diff1
value: 31.883
- type: nauc_mrr_at_1000_max
value: -1.6053000000000002
- type: nauc_mrr_at_1000_std
value: -24.627
- type: nauc_mrr_at_1000_diff1
value: 31.896200000000004
- type: main_score
value: 40.325
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 96.311
- type: f1
value: 96.0432
- type: f1_weighted
value: 96.3129
- type: main_score
value: 96.311
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 86.5048
- type: f1
value: 67.3883
- type: f1_weighted
value: 88.2687
- type: main_score
value: 86.5048
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 75.7902
- type: f1
value: 73.2351
- type: f1_weighted
value: 75.5894
- type: main_score
value: 75.7902
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 77.3571
- type: f1
value: 77.3086
- type: f1_weighted
value: 77.235
- type: main_score
value: 77.3571
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P (default)
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 39.4623
- type: v_measure_std
value: 1.3405
- type: main_score
value: 39.4623
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S (default)
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 37.5047
- type: v_measure_std
value: 1.2052
- type: main_score
value: 37.5047
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking (default)
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: map
value: 28.9125
- type: mrr
value: 29.656900000000004
- type: nAUC_map_max
value: -21.7929
- type: nAUC_map_std
value: -4.2712
- type: nAUC_map_diff1
value: 11.698500000000001
- type: nAUC_mrr_max
value: -16.4251
- type: nAUC_mrr_std
value: -2.1364
- type: nAUC_mrr_diff1
value: 11.3017
- type: main_score
value: 28.9125
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus (default)
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: ndcg_at_1
value: 44.737
- type: ndcg_at_3
value: 40.943000000000005
- type: ndcg_at_5
value: 38.914
- type: ndcg_at_10
value: 35.762
- type: ndcg_at_20
value: 33.274
- type: ndcg_at_100
value: 32.861000000000004
- type: ndcg_at_1000
value: 41.509
- type: map_at_1
value: 5.792
- type: map_at_3
value: 9.506
- type: map_at_5
value: 11.213
- type: map_at_10
value: 13.165
- type: map_at_20
value: 14.663
- type: map_at_100
value: 16.885
- type: map_at_1000
value: 18.368000000000002
- type: recall_at_1
value: 5.792
- type: recall_at_3
value: 10.517
- type: recall_at_5
value: 13.296
- type: recall_at_10
value: 17.37
- type: recall_at_20
value: 21.22
- type: recall_at_100
value: 33.953
- type: recall_at_1000
value: 65.462
- type: precision_at_1
value: 46.749
- type: precision_at_3
value: 38.596000000000004
- type: precision_at_5
value: 34.303
- type: precision_at_10
value: 26.779999999999998
- type: precision_at_20
value: 19.830000000000002
- type: precision_at_100
value: 8.466999999999999
- type: precision_at_1000
value: 2.12
- type: mrr_at_1
value: 46.7492
- type: mrr_at_3
value: 54.02479999999999
- type: mrr_at_5
value: 55.031
- type: mrr_at_10
value: 55.8081
- type: mrr_at_20
value: 56.143699999999995
- type: mrr_at_100
value: 56.4018
- type: mrr_at_1000
value: 56.4497
- type: nauc_ndcg_at_1_max
value: 54.4799
- type: nauc_ndcg_at_1_std
value: 19.8188
- type: nauc_ndcg_at_1_diff1
value: 35.095
- type: nauc_ndcg_at_3_max
value: 49.5282
- type: nauc_ndcg_at_3_std
value: 19.1444
- type: nauc_ndcg_at_3_diff1
value: 25.074800000000003
- type: nauc_ndcg_at_5_max
value: 50.437200000000004
- type: nauc_ndcg_at_5_std
value: 21.5019
- type: nauc_ndcg_at_5_diff1
value: 21.414
- type: nauc_ndcg_at_10_max
value: 46.907199999999996
- type: nauc_ndcg_at_10_std
value: 22.5521
- type: nauc_ndcg_at_10_diff1
value: 19.0604
- type: nauc_ndcg_at_20_max
value: 47.216
- type: nauc_ndcg_at_20_std
value: 24.535
- type: nauc_ndcg_at_20_diff1
value: 18.3393
- type: nauc_ndcg_at_100_max
value: 47.647
- type: nauc_ndcg_at_100_std
value: 25.7305
- type: nauc_ndcg_at_100_diff1
value: 20.5066
- type: nauc_ndcg_at_1000_max
value: 53.0034
- type: nauc_ndcg_at_1000_std
value: 32.229600000000005
- type: nauc_ndcg_at_1000_diff1
value: 21.729799999999997
- type: nauc_map_at_1_max
value: 18.8513
- type: nauc_map_at_1_std
value: -13.5714
- type: nauc_map_at_1_diff1
value: 42.4674
- type: nauc_map_at_3_max
value: 19.8798
- type: nauc_map_at_3_std
value: -12.600700000000002
- type: nauc_map_at_3_diff1
value: 34.545700000000004
- type: nauc_map_at_5_max
value: 24.756800000000002
- type: nauc_map_at_5_std
value: -7.959099999999999
- type: nauc_map_at_5_diff1
value: 29.1707
- type: nauc_map_at_10_max
value: 28.1916
- type: nauc_map_at_10_std
value: -3.1498
- type: nauc_map_at_10_diff1
value: 25.1522
- type: nauc_map_at_20_max
value: 31.9354
- type: nauc_map_at_20_std
value: 2.319
- type: nauc_map_at_20_diff1
value: 22.778100000000002
- type: nauc_map_at_100_max
value: 35.938700000000004
- type: nauc_map_at_100_std
value: 9.3661
- type: nauc_map_at_100_diff1
value: 21.2726
- type: nauc_map_at_1000_max
value: 36.8531
- type: nauc_map_at_1000_std
value: 12.0615
- type: nauc_map_at_1000_diff1
value: 19.761699999999998
- type: nauc_recall_at_1_max
value: 18.8513
- type: nauc_recall_at_1_std
value: -13.5714
- type: nauc_recall_at_1_diff1
value: 42.4674
- type: nauc_recall_at_3_max
value: 17.405
- type: nauc_recall_at_3_std
value: -11.779399999999999
- type: nauc_recall_at_3_diff1
value: 31.8655
- type: nauc_recall_at_5_max
value: 22.8368
- type: nauc_recall_at_5_std
value: -4.7815
- type: nauc_recall_at_5_diff1
value: 23.4258
- type: nauc_recall_at_10_max
value: 23.6849
- type: nauc_recall_at_10_std
value: 0.1013
- type: nauc_recall_at_10_diff1
value: 18.4986
- type: nauc_recall_at_20_max
value: 27.289400000000004
- type: nauc_recall_at_20_std
value: 7.126200000000001
- type: nauc_recall_at_20_diff1
value: 14.6343
- type: nauc_recall_at_100_max
value: 26.9683
- type: nauc_recall_at_100_std
value: 16.145899999999997
- type: nauc_recall_at_100_diff1
value: 9.705
- type: nauc_recall_at_1000_max
value: 18.4336
- type: nauc_recall_at_1000_std
value: 18.2245
- type: nauc_recall_at_1000_diff1
value: 2.3923
- type: nauc_precision_at_1_max
value: 56.8886
- type: nauc_precision_at_1_std
value: 22.122
- type: nauc_precision_at_1_diff1
value: 33.3152
- type: nauc_precision_at_3_max
value: 47.759299999999996
- type: nauc_precision_at_3_std
value: 23.3157
- type: nauc_precision_at_3_diff1
value: 14.015
- type: nauc_precision_at_5_max
value: 48.8089
- type: nauc_precision_at_5_std
value: 28.7149
- type: nauc_precision_at_5_diff1
value: 6.0146999999999995
- type: nauc_precision_at_10_max
value: 41.620200000000004
- type: nauc_precision_at_10_std
value: 32.275999999999996
- type: nauc_precision_at_10_diff1
value: -0.6839
- type: nauc_precision_at_20_max
value: 39.6123
- type: nauc_precision_at_20_std
value: 37.4586
- type: nauc_precision_at_20_diff1
value: -4.5309
- type: nauc_precision_at_100_max
value: 25.199700000000004
- type: nauc_precision_at_100_std
value: 34.449400000000004
- type: nauc_precision_at_100_diff1
value: -9.290700000000001
- type: nauc_precision_at_1000_max
value: 8.876000000000001
- type: nauc_precision_at_1000_std
value: 20.748
- type: nauc_precision_at_1000_diff1
value: -12.327399999999999
- type: nauc_mrr_at_1_max
value: 56.717600000000004
- type: nauc_mrr_at_1_std
value: 20.7515
- type: nauc_mrr_at_1_diff1
value: 33.3152
- type: nauc_mrr_at_3_max
value: 57.90689999999999
- type: nauc_mrr_at_3_std
value: 25.1369
- type: nauc_mrr_at_3_diff1
value: 31.157
- type: nauc_mrr_at_5_max
value: 59.2569
- type: nauc_mrr_at_5_std
value: 27.054000000000002
- type: nauc_mrr_at_5_diff1
value: 30.840400000000002
- type: nauc_mrr_at_10_max
value: 59.44819999999999
- type: nauc_mrr_at_10_std
value: 27.903299999999998
- type: nauc_mrr_at_10_diff1
value: 31.4959
- type: nauc_mrr_at_20_max
value: 59.7104
- type: nauc_mrr_at_20_std
value: 28.2328
- type: nauc_mrr_at_20_diff1
value: 31.330099999999998
- type: nauc_mrr_at_100_max
value: 59.573600000000006
- type: nauc_mrr_at_100_std
value: 28.044900000000002
- type: nauc_mrr_at_100_diff1
value: 31.305100000000003
- type: nauc_mrr_at_1000_max
value: 59.5608
- type: nauc_mrr_at_1000_std
value: 28.0034
- type: nauc_mrr_at_1000_diff1
value: 31.314199999999996
- type: main_score
value: 35.762
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ (default)
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: ndcg_at_1
value: 39.89
- type: ndcg_at_3
value: 51.121
- type: ndcg_at_5
value: 55.184
- type: ndcg_at_10
value: 58.63699999999999
- type: ndcg_at_20
value: 60.659
- type: ndcg_at_100
value: 62.429
- type: ndcg_at_1000
value: 62.965
- type: map_at_1
value: 35.361
- type: map_at_3
value: 47.071000000000005
- type: map_at_5
value: 49.571
- type: map_at_10
value: 51.178999999999995
- type: map_at_20
value: 51.827999999999996
- type: map_at_100
value: 52.117000000000004
- type: map_at_1000
value: 52.141000000000005
- type: recall_at_1
value: 35.361
- type: recall_at_3
value: 59.40299999999999
- type: recall_at_5
value: 68.721
- type: recall_at_10
value: 78.64
- type: recall_at_20
value: 86.066
- type: recall_at_100
value: 94.865
- type: recall_at_1000
value: 98.79299999999999
- type: precision_at_1
value: 39.89
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.182
- type: precision_at_10
value: 9.363000000000001
- type: precision_at_20
value: 5.165
- type: precision_at_100
value: 1.15
- type: precision_at_1000
value: 0.12
- type: mrr_at_1
value: 39.8899
- type: mrr_at_3
value: 50.507000000000005
- type: mrr_at_5
value: 52.4899
- type: mrr_at_10
value: 53.761700000000005
- type: mrr_at_20
value: 54.223600000000005
- type: mrr_at_100
value: 54.427800000000005
- type: mrr_at_1000
value: 54.443299999999994
- type: nauc_ndcg_at_1_max
value: 19.524
- type: nauc_ndcg_at_1_std
value: -5.1782
- type: nauc_ndcg_at_1_diff1
value: 35.5793
- type: nauc_ndcg_at_3_max
value: 24.2974
- type: nauc_ndcg_at_3_std
value: -5.2507
- type: nauc_ndcg_at_3_diff1
value: 29.9937
- type: nauc_ndcg_at_5_max
value: 26.502100000000002
- type: nauc_ndcg_at_5_std
value: -3.6393
- type: nauc_ndcg_at_5_diff1
value: 30.0319
- type: nauc_ndcg_at_10_max
value: 26.66
- type: nauc_ndcg_at_10_std
value: -2.3816
- type: nauc_ndcg_at_10_diff1
value: 30.678100000000004
- type: nauc_ndcg_at_20_max
value: 26.9991
- type: nauc_ndcg_at_20_std
value: -1.5933
- type: nauc_ndcg_at_20_diff1
value: 30.824
- type: nauc_ndcg_at_100_max
value: 26.879199999999997
- type: nauc_ndcg_at_100_std
value: -0.8982
- type: nauc_ndcg_at_100_diff1
value: 31.338
- type: nauc_ndcg_at_1000_max
value: 26.2157
- type: nauc_ndcg_at_1000_std
value: -1.6907999999999999
- type: nauc_ndcg_at_1000_diff1
value: 31.428099999999997
- type: nauc_map_at_1_max
value: 17.2868
- type: nauc_map_at_1_std
value: -7.0931
- type: nauc_map_at_1_diff1
value: 35.9826
- type: nauc_map_at_3_max
value: 23.0406
- type: nauc_map_at_3_std
value: -5.973599999999999
- type: nauc_map_at_3_diff1
value: 31.9658
- type: nauc_map_at_5_max
value: 24.3828
- type: nauc_map_at_5_std
value: -4.8592
- type: nauc_map_at_5_diff1
value: 31.9392
- type: nauc_map_at_10_max
value: 24.4782
- type: nauc_map_at_10_std
value: -4.2431
- type: nauc_map_at_10_diff1
value: 32.130399999999995
- type: nauc_map_at_20_max
value: 24.5589
- type: nauc_map_at_20_std
value: -3.9991
- type: nauc_map_at_20_diff1
value: 32.201299999999996
- type: nauc_map_at_100_max
value: 24.5696
- type: nauc_map_at_100_std
value: -3.8531999999999997
- type: nauc_map_at_100_diff1
value: 32.284
- type: nauc_map_at_1000_max
value: 24.546599999999998
- type: nauc_map_at_1000_std
value: -3.8784
- type: nauc_map_at_1000_diff1
value: 32.2879
- type: nauc_recall_at_1_max
value: 17.2868
- type: nauc_recall_at_1_std
value: -7.0931
- type: nauc_recall_at_1_diff1
value: 35.9826
- type: nauc_recall_at_3_max
value: 26.753300000000003
- type: nauc_recall_at_3_std
value: -5.1822
- type: nauc_recall_at_3_diff1
value: 24.4274
- type: nauc_recall_at_5_max
value: 32.697900000000004
- type: nauc_recall_at_5_std
value: -1.4673
- type: nauc_recall_at_5_diff1
value: 23.5655
- type: nauc_recall_at_10_max
value: 35.22
- type: nauc_recall_at_10_std
value: 3.6904
- type: nauc_recall_at_10_diff1
value: 24.5926
- type: nauc_recall_at_20_max
value: 42.0975
- type: nauc_recall_at_20_std
value: 11.574
- type: nauc_recall_at_20_diff1
value: 23.5964
- type: nauc_recall_at_100_max
value: 62.5657
- type: nauc_recall_at_100_std
value: 45.2673
- type: nauc_recall_at_100_diff1
value: 26.6811
- type: nauc_recall_at_1000_max
value: 78.6598
- type: nauc_recall_at_1000_std
value: 70.7318
- type: nauc_recall_at_1000_diff1
value: 29.530099999999997
- type: nauc_precision_at_1_max
value: 19.524
- type: nauc_precision_at_1_std
value: -5.1782
- type: nauc_precision_at_1_diff1
value: 35.5793
- type: nauc_precision_at_3_max
value: 27.230999999999998
- type: nauc_precision_at_3_std
value: 0.13649999999999998
- type: nauc_precision_at_3_diff1
value: 18.817500000000003
- type: nauc_precision_at_5_max
value: 28.734700000000004
- type: nauc_precision_at_5_std
value: 5.1929
- type: nauc_precision_at_5_diff1
value: 14.3006
- type: nauc_precision_at_10_max
value: 25.3071
- type: nauc_precision_at_10_std
value: 11.0166
- type: nauc_precision_at_10_diff1
value: 9.481
- type: nauc_precision_at_20_max
value: 22.5098
- type: nauc_precision_at_20_std
value: 15.695400000000001
- type: nauc_precision_at_20_diff1
value: 4.5483
- type: nauc_precision_at_100_max
value: 15.834999999999999
- type: nauc_precision_at_100_std
value: 21.391099999999998
- type: nauc_precision_at_100_diff1
value: -2.3594
- type: nauc_precision_at_1000_max
value: 7.2892
- type: nauc_precision_at_1000_std
value: 16.1876
- type: nauc_precision_at_1000_diff1
value: -6.698900000000001
- type: nauc_mrr_at_1_max
value: 19.524
- type: nauc_mrr_at_1_std
value: -5.1782
- type: nauc_mrr_at_1_diff1
value: 35.5793
- type: nauc_mrr_at_3_max
value: 23.3415
- type: nauc_mrr_at_3_std
value: -3.7981000000000003
- type: nauc_mrr_at_3_diff1
value: 30.531799999999997
- type: nauc_mrr_at_5_max
value: 24.2743
- type: nauc_mrr_at_5_std
value: -3.1985
- type: nauc_mrr_at_5_diff1
value: 30.7564
- type: nauc_mrr_at_10_max
value: 24.1952
- type: nauc_mrr_at_10_std
value: -2.9042
- type: nauc_mrr_at_10_diff1
value: 31.2183
- type: nauc_mrr_at_20_max
value: 24.2339
- type: nauc_mrr_at_20_std
value: -2.8143000000000002
- type: nauc_mrr_at_20_diff1
value: 31.252999999999997
- type: nauc_mrr_at_100_max
value: 24.1954
- type: nauc_mrr_at_100_std
value: -2.7797
- type: nauc_mrr_at_100_diff1
value: 31.3283
- type: nauc_mrr_at_1000_max
value: 24.1793
- type: nauc_mrr_at_1000_std
value: -2.7987
- type: nauc_mrr_at_1000_diff1
value: 31.330099999999998
- type: main_score
value: 58.63699999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval (default)
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: ndcg_at_1
value: 83.33
- type: ndcg_at_3
value: 87.21900000000001
- type: ndcg_at_5
value: 88.725
- type: ndcg_at_10
value: 89.848
- type: ndcg_at_20
value: 90.426
- type: ndcg_at_100
value: 90.881
- type: ndcg_at_1000
value: 90.947
- type: map_at_1
value: 72.354
- type: map_at_3
value: 83.447
- type: map_at_5
value: 85.3
- type: map_at_10
value: 86.33800000000001
- type: map_at_20
value: 86.752
- type: map_at_100
value: 86.952
- type: map_at_1000
value: 86.965
- type: recall_at_1
value: 72.354
- type: recall_at_3
value: 88.726
- type: recall_at_5
value: 93.07900000000001
- type: recall_at_10
value: 96.392
- type: recall_at_20
value: 98.185
- type: recall_at_100
value: 99.737
- type: recall_at_1000
value: 99.994
- type: precision_at_1
value: 83.33
- type: precision_at_3
value: 38.163000000000004
- type: precision_at_5
value: 25.054
- type: precision_at_10
value: 13.600000000000001
- type: precision_at_20
value: 7.199999999999999
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: mrr_at_1
value: 83.33
- type: mrr_at_3
value: 88.2583
- type: mrr_at_5
value: 88.8703
- type: mrr_at_10
value: 89.1146
- type: mrr_at_20
value: 89.1631
- type: mrr_at_100
value: 89.1825
- type: mrr_at_1000
value: 89.1829
- type: nauc_ndcg_at_1_max
value: 35.1345
- type: nauc_ndcg_at_1_std
value: -51.2196
- type: nauc_ndcg_at_1_diff1
value: 78.4909
- type: nauc_ndcg_at_3_max
value: 32.547399999999996
- type: nauc_ndcg_at_3_std
value: -59.377500000000005
- type: nauc_ndcg_at_3_diff1
value: 76.46300000000001
- type: nauc_ndcg_at_5_max
value: 33.5504
- type: nauc_ndcg_at_5_std
value: -60.3836
- type: nauc_ndcg_at_5_diff1
value: 76.9467
- type: nauc_ndcg_at_10_max
value: 34.1371
- type: nauc_ndcg_at_10_std
value: -59.3526
- type: nauc_ndcg_at_10_diff1
value: 77.1373
- type: nauc_ndcg_at_20_max
value: 34.5537
- type: nauc_ndcg_at_20_std
value: -57.8514
- type: nauc_ndcg_at_20_diff1
value: 77.2059
- type: nauc_ndcg_at_100_max
value: 34.8817
- type: nauc_ndcg_at_100_std
value: -55.6778
- type: nauc_ndcg_at_100_diff1
value: 77.08080000000001
- type: nauc_ndcg_at_1000_max
value: 35.0003
- type: nauc_ndcg_at_1000_std
value: -55.292699999999996
- type: nauc_ndcg_at_1000_diff1
value: 77.078
- type: nauc_map_at_1_max
value: 24.889400000000002
- type: nauc_map_at_1_std
value: -50.5244
- type: nauc_map_at_1_diff1
value: 80.9461
- type: nauc_map_at_3_max
value: 30.461899999999996
- type: nauc_map_at_3_std
value: -61.017999999999994
- type: nauc_map_at_3_diff1
value: 77.8986
- type: nauc_map_at_5_max
value: 31.995800000000003
- type: nauc_map_at_5_std
value: -61.0579
- type: nauc_map_at_5_diff1
value: 77.6265
- type: nauc_map_at_10_max
value: 32.9371
- type: nauc_map_at_10_std
value: -59.662099999999995
- type: nauc_map_at_10_diff1
value: 77.3695
- type: nauc_map_at_20_max
value: 33.3268
- type: nauc_map_at_20_std
value: -58.4642
- type: nauc_map_at_20_diff1
value: 77.2616
- type: nauc_map_at_100_max
value: 33.481300000000005
- type: nauc_map_at_100_std
value: -57.51349999999999
- type: nauc_map_at_100_diff1
value: 77.1762
- type: nauc_map_at_1000_max
value: 33.51
- type: nauc_map_at_1000_std
value: -57.4361
- type: nauc_map_at_1000_diff1
value: 77.173
- type: nauc_recall_at_1_max
value: 24.889400000000002
- type: nauc_recall_at_1_std
value: -50.5244
- type: nauc_recall_at_1_diff1
value: 80.9461
- type: nauc_recall_at_3_max
value: 26.490399999999998
- type: nauc_recall_at_3_std
value: -70.6466
- type: nauc_recall_at_3_diff1
value: 74.3857
- type: nauc_recall_at_5_max
value: 28.3327
- type: nauc_recall_at_5_std
value: -77.8455
- type: nauc_recall_at_5_diff1
value: 73.348
- type: nauc_recall_at_10_max
value: 30.476999999999997
- type: nauc_recall_at_10_std
value: -84.933
- type: nauc_recall_at_10_diff1
value: 73.7724
- type: nauc_recall_at_20_max
value: 31.954700000000003
- type: nauc_recall_at_20_std
value: -88.4871
- type: nauc_recall_at_20_diff1
value: 75.3748
- type: nauc_recall_at_100_max
value: 26.290799999999997
- type: nauc_recall_at_100_std
value: -86.7429
- type: nauc_recall_at_100_diff1
value: 71.1186
- type: nauc_recall_at_1000_max
value: -46.823100000000004
- type: nauc_recall_at_1000_std
value: -34.474
- type: nauc_recall_at_1000_diff1
value: 43.9622
- type: nauc_precision_at_1_max
value: 35.1345
- type: nauc_precision_at_1_std
value: -51.2196
- type: nauc_precision_at_1_diff1
value: 78.4909
- type: nauc_precision_at_3_max
value: 5.0033
- type: nauc_precision_at_3_std
value: 6.1183000000000005
- type: nauc_precision_at_3_diff1
value: -23.093
- type: nauc_precision_at_5_max
value: 0.8462000000000001
- type: nauc_precision_at_5_std
value: 19.284599999999998
- type: nauc_precision_at_5_diff1
value: -34.740700000000004
- type: nauc_precision_at_10_max
value: -2.476
- type: nauc_precision_at_10_std
value: 30.449900000000003
- type: nauc_precision_at_10_diff1
value: -41.373
- type: nauc_precision_at_20_max
value: -4.067
- type: nauc_precision_at_20_std
value: 37.2089
- type: nauc_precision_at_20_diff1
value: -43.4846
- type: nauc_precision_at_100_max
value: -5.4187
- type: nauc_precision_at_100_std
value: 44.7639
- type: nauc_precision_at_100_diff1
value: -44.9325
- type: nauc_precision_at_1000_max
value: -5.309
- type: nauc_precision_at_1000_std
value: 46.4094
- type: nauc_precision_at_1000_diff1
value: -45.0127
- type: nauc_mrr_at_1_max
value: 35.1345
- type: nauc_mrr_at_1_std
value: -51.2196
- type: nauc_mrr_at_1_diff1
value: 78.4909
- type: nauc_mrr_at_3_max
value: 35.5355
- type: nauc_mrr_at_3_std
value: -54.636399999999995
- type: nauc_mrr_at_3_diff1
value: 77.537
- type: nauc_mrr_at_5_max
value: 35.8853
- type: nauc_mrr_at_5_std
value: -54.1871
- type: nauc_mrr_at_5_diff1
value: 77.6977
- type: nauc_mrr_at_10_max
value: 35.8488
- type: nauc_mrr_at_10_std
value: -53.825599999999994
- type: nauc_mrr_at_10_diff1
value: 77.7459
- type: nauc_mrr_at_20_max
value: 35.7887
- type: nauc_mrr_at_20_std
value: -53.778800000000004
- type: nauc_mrr_at_20_diff1
value: 77.7606
- type: nauc_mrr_at_100_max
value: 35.7656
- type: nauc_mrr_at_100_std
value: -53.74640000000001
- type: nauc_mrr_at_100_diff1
value: 77.7597
- type: nauc_mrr_at_1000_max
value: 35.7642
- type: nauc_mrr_at_1000_std
value: -53.744899999999994
- type: nauc_mrr_at_1000_diff1
value: 77.7598
- type: main_score
value: 89.848
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering (default)
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 58.794599999999996
- type: v_measure_std
value: 3.7606
- type: main_score
value: 58.794599999999996
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P (default)
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 65.4871
- type: v_measure_std
value: 13.1853
- type: main_score
value: 65.4871
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS (default)
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: ndcg_at_1
value: 26.0
- type: ndcg_at_3
value: 21.369
- type: ndcg_at_5
value: 18.865000000000002
- type: ndcg_at_10
value: 22.847
- type: ndcg_at_20
value: 25.817
- type: ndcg_at_100
value: 31.824
- type: ndcg_at_1000
value: 37.997
- type: map_at_1
value: 5.268
- type: map_at_3
value: 9.604
- type: map_at_5
value: 11.797
- type: map_at_10
value: 13.891
- type: map_at_20
value: 15.062000000000001
- type: map_at_100
value: 16.323
- type: map_at_1000
value: 16.71
- type: recall_at_1
value: 5.268
- type: recall_at_3
value: 12.203
- type: recall_at_5
value: 16.963
- type: recall_at_10
value: 24.29
- type: recall_at_20
value: 31.267
- type: recall_at_100
value: 50.727
- type: recall_at_1000
value: 80.67800000000001
- type: precision_at_1
value: 26.0
- type: precision_at_3
value: 20.067
- type: precision_at_5
value: 16.74
- type: precision_at_10
value: 11.97
- type: precision_at_20
value: 7.7
- type: precision_at_100
value: 2.4979999999999998
- type: precision_at_1000
value: 0.398
- type: mrr_at_1
value: 26.0
- type: mrr_at_3
value: 34.2833
- type: mrr_at_5
value: 35.9333
- type: mrr_at_10
value: 37.5791
- type: mrr_at_20
value: 38.1301
- type: mrr_at_100
value: 38.556200000000004
- type: mrr_at_1000
value: 38.606899999999996
- type: nauc_ndcg_at_1_max
value: 21.9327
- type: nauc_ndcg_at_1_std
value: 8.761800000000001
- type: nauc_ndcg_at_1_diff1
value: 22.0695
- type: nauc_ndcg_at_3_max
value: 27.475300000000004
- type: nauc_ndcg_at_3_std
value: 11.126
- type: nauc_ndcg_at_3_diff1
value: 17.1458
- type: nauc_ndcg_at_5_max
value: 28.116200000000003
- type: nauc_ndcg_at_5_std
value: 13.919799999999999
- type: nauc_ndcg_at_5_diff1
value: 15.894400000000001
- type: nauc_ndcg_at_10_max
value: 30.3757
- type: nauc_ndcg_at_10_std
value: 17.2527
- type: nauc_ndcg_at_10_diff1
value: 14.1508
- type: nauc_ndcg_at_20_max
value: 31.451600000000003
- type: nauc_ndcg_at_20_std
value: 19.9009
- type: nauc_ndcg_at_20_diff1
value: 13.5029
- type: nauc_ndcg_at_100_max
value: 33.9342
- type: nauc_ndcg_at_100_std
value: 25.7798
- type: nauc_ndcg_at_100_diff1
value: 14.335500000000001
- type: nauc_ndcg_at_1000_max
value: 33.5581
- type: nauc_ndcg_at_1000_std
value: 25.082300000000004
- type: nauc_ndcg_at_1000_diff1
value: 14.223099999999999
- type: nauc_map_at_1_max
value: 22.0412
- type: nauc_map_at_1_std
value: 8.932
- type: nauc_map_at_1_diff1
value: 22.2384
- type: nauc_map_at_3_max
value: 26.761400000000002
- type: nauc_map_at_3_std
value: 9.1566
- type: nauc_map_at_3_diff1
value: 17.2375
- type: nauc_map_at_5_max
value: 27.7594
- type: nauc_map_at_5_std
value: 12.6506
- type: nauc_map_at_5_diff1
value: 15.739600000000001
- type: nauc_map_at_10_max
value: 29.6498
- type: nauc_map_at_10_std
value: 15.2716
- type: nauc_map_at_10_diff1
value: 14.638000000000002
- type: nauc_map_at_20_max
value: 30.1827
- type: nauc_map_at_20_std
value: 16.7742
- type: nauc_map_at_20_diff1
value: 14.0863
- type: nauc_map_at_100_max
value: 31.3787
- type: nauc_map_at_100_std
value: 19.3168
- type: nauc_map_at_100_diff1
value: 14.3807
- type: nauc_map_at_1000_max
value: 31.3749
- type: nauc_map_at_1000_std
value: 19.4008
- type: nauc_map_at_1000_diff1
value: 14.3151
- type: nauc_recall_at_1_max
value: 22.0412
- type: nauc_recall_at_1_std
value: 8.932
- type: nauc_recall_at_1_diff1
value: 22.2384
- type: nauc_recall_at_3_max
value: 29.4548
- type: nauc_recall_at_3_std
value: 12.4116
- type: nauc_recall_at_3_diff1
value: 14.9834
- type: nauc_recall_at_5_max
value: 28.7014
- type: nauc_recall_at_5_std
value: 16.1355
- type: nauc_recall_at_5_diff1
value: 12.4951
- type: nauc_recall_at_10_max
value: 31.2425
- type: nauc_recall_at_10_std
value: 21.3563
- type: nauc_recall_at_10_diff1
value: 9.0205
- type: nauc_recall_at_20_max
value: 31.478
- type: nauc_recall_at_20_std
value: 25.4813
- type: nauc_recall_at_20_diff1
value: 7.3628
- type: nauc_recall_at_100_max
value: 33.596199999999996
- type: nauc_recall_at_100_std
value: 37.5122
- type: nauc_recall_at_100_diff1
value: 8.3252
- type: nauc_recall_at_1000_max
value: 30.4869
- type: nauc_recall_at_1000_std
value: 38.8306
- type: nauc_recall_at_1000_diff1
value: 4.6079
- type: nauc_precision_at_1_max
value: 21.9327
- type: nauc_precision_at_1_std
value: 8.761800000000001
- type: nauc_precision_at_1_diff1
value: 22.0695
- type: nauc_precision_at_3_max
value: 29.608600000000003
- type: nauc_precision_at_3_std
value: 12.3347
- type: nauc_precision_at_3_diff1
value: 14.810200000000002
- type: nauc_precision_at_5_max
value: 28.8061
- type: nauc_precision_at_5_std
value: 16.0502
- type: nauc_precision_at_5_diff1
value: 12.251900000000001
- type: nauc_precision_at_10_max
value: 31.3513
- type: nauc_precision_at_10_std
value: 21.226300000000002
- type: nauc_precision_at_10_diff1
value: 8.772499999999999
- type: nauc_precision_at_20_max
value: 31.692999999999998
- type: nauc_precision_at_20_std
value: 25.4628
- type: nauc_precision_at_20_diff1
value: 7.1315
- type: nauc_precision_at_100_max
value: 33.3115
- type: nauc_precision_at_100_std
value: 36.888799999999996
- type: nauc_precision_at_100_diff1
value: 7.820100000000001
- type: nauc_precision_at_1000_max
value: 29.1927
- type: nauc_precision_at_1000_std
value: 36.2523
- type: nauc_precision_at_1000_diff1
value: 3.5833999999999997
- type: nauc_mrr_at_1_max
value: 21.9327
- type: nauc_mrr_at_1_std
value: 8.761800000000001
- type: nauc_mrr_at_1_diff1
value: 22.0695
- type: nauc_mrr_at_3_max
value: 26.1187
- type: nauc_mrr_at_3_std
value: 12.5639
- type: nauc_mrr_at_3_diff1
value: 19.642599999999998
- type: nauc_mrr_at_5_max
value: 25.8562
- type: nauc_mrr_at_5_std
value: 12.495000000000001
- type: nauc_mrr_at_5_diff1
value: 19.3465
- type: nauc_mrr_at_10_max
value: 26.218200000000003
- type: nauc_mrr_at_10_std
value: 13.1243
- type: nauc_mrr_at_10_diff1
value: 18.9542
- type: nauc_mrr_at_20_max
value: 26.422099999999997
- type: nauc_mrr_at_20_std
value: 13.4214
- type: nauc_mrr_at_20_diff1
value: 19.0105
- type: nauc_mrr_at_100_max
value: 26.338
- type: nauc_mrr_at_100_std
value: 13.4264
- type: nauc_mrr_at_100_diff1
value: 18.9729
- type: nauc_mrr_at_1000_max
value: 26.3327
- type: nauc_mrr_at_1000_std
value: 13.3904
- type: nauc_mrr_at_1000_diff1
value: 19.004199999999997
- type: main_score
value: 22.847
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: pearson
value: 81.13050000000001
- type: spearman
value: 79.01310000000001
- type: cosine_pearson
value: 81.13050000000001
- type: cosine_spearman
value: 79.01310000000001
- type: manhattan_pearson
value: 79.03999999999999
- type: manhattan_spearman
value: 79.1744
- type: euclidean_pearson
value: 79.0977
- type: euclidean_spearman
value: 79.2268
- type: main_score
value: 79.01310000000001
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: pearson
value: 86.9675
- type: spearman
value: 80.3531
- type: cosine_pearson
value: 86.9675
- type: cosine_spearman
value: 80.3531
- type: manhattan_pearson
value: 82.2315
- type: manhattan_spearman
value: 79.7004
- type: euclidean_pearson
value: 82.3305
- type: euclidean_spearman
value: 79.8601
- type: main_score
value: 80.3531
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: pearson
value: 85.6041
- type: spearman
value: 86.0453
- type: cosine_pearson
value: 85.6041
- type: cosine_spearman
value: 86.0453
- type: manhattan_pearson
value: 85.2548
- type: manhattan_spearman
value: 85.8908
- type: euclidean_pearson
value: 85.253
- type: euclidean_spearman
value: 85.9181
- type: main_score
value: 86.0453
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: pearson
value: 82.8792
- type: spearman
value: 82.9681
- type: cosine_pearson
value: 82.8792
- type: cosine_spearman
value: 82.9681
- type: manhattan_pearson
value: 81.4789
- type: manhattan_spearman
value: 82.4797
- type: euclidean_pearson
value: 81.4674
- type: euclidean_spearman
value: 82.4547
- type: main_score
value: 82.9681
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: pearson
value: 87.5356
- type: spearman
value: 88.06540000000001
- type: cosine_pearson
value: 87.5356
- type: cosine_spearman
value: 88.06540000000001
- type: manhattan_pearson
value: 87.10759999999999
- type: manhattan_spearman
value: 87.75309999999999
- type: euclidean_pearson
value: 87.1489
- type: euclidean_spearman
value: 87.7857
- type: main_score
value: 88.06540000000001
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: pearson
value: 85.0208
- type: spearman
value: 86.0136
- type: cosine_pearson
value: 85.0208
- type: cosine_spearman
value: 86.0136
- type: manhattan_pearson
value: 85.22
- type: manhattan_spearman
value: 86.1101
- type: euclidean_pearson
value: 85.2043
- type: euclidean_spearman
value: 86.113
- type: main_score
value: 86.0136
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: pearson
value: 89.4083
- type: spearman
value: 88.9498
- type: cosine_pearson
value: 89.4083
- type: cosine_spearman
value: 88.9498
- type: manhattan_pearson
value: 89.46539999999999
- type: manhattan_spearman
value: 88.8754
- type: euclidean_pearson
value: 89.4326
- type: euclidean_spearman
value: 88.8148
- type: main_score
value: 88.9498
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: pearson
value: 66.60770000000001
- type: spearman
value: 67.1515
- type: cosine_pearson
value: 66.60770000000001
- type: cosine_spearman
value: 67.1515
- type: manhattan_pearson
value: 66.5604
- type: manhattan_spearman
value: 66.4621
- type: euclidean_pearson
value: 66.4628
- type: euclidean_spearman
value: 66.2979
- type: main_score
value: 67.1515
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: pearson
value: 86.86399999999999
- type: spearman
value: 87.7139
- type: cosine_pearson
value: 86.86399999999999
- type: cosine_spearman
value: 87.7139
- type: manhattan_pearson
value: 86.6602
- type: manhattan_spearman
value: 87.2606
- type: euclidean_pearson
value: 86.5924
- type: euclidean_spearman
value: 87.241
- type: main_score
value: 87.7139
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR (default)
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 84.37360000000001
- type: mrr
value: 95.6275
- type: nAUC_map_max
value: 52.991699999999994
- type: nAUC_map_std
value: 66.8168
- type: nAUC_map_diff1
value: -3.2009999999999996
- type: nAUC_mrr_max
value: 85.7492
- type: nAUC_mrr_std
value: 77.3543
- type: nAUC_mrr_diff1
value: 38.014700000000005
- type: main_score
value: 84.37360000000001
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact (default)
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_3
value: 68.209
- type: ndcg_at_5
value: 71.409
- type: ndcg_at_10
value: 73.476
- type: ndcg_at_20
value: 74.339
- type: ndcg_at_100
value: 75.57000000000001
- type: ndcg_at_1000
value: 75.955
- type: map_at_1
value: 58.178
- type: map_at_3
value: 65.71900000000001
- type: map_at_5
value: 67.73
- type: map_at_10
value: 68.821
- type: map_at_20
value: 69.07600000000001
- type: map_at_100
value: 69.245
- type: map_at_1000
value: 69.258
- type: recall_at_1
value: 58.178
- type: recall_at_3
value: 73.172
- type: recall_at_5
value: 81.0
- type: recall_at_10
value: 86.867
- type: recall_at_20
value: 90.267
- type: recall_at_100
value: 96.933
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 60.667
- type: precision_at_3
value: 26.444000000000003
- type: precision_at_5
value: 18.0
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_20
value: 5.133
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 60.6667
- type: mrr_at_3
value: 67.1667
- type: mrr_at_5
value: 68.85
- type: mrr_at_10
value: 69.4799
- type: mrr_at_20
value: 69.6658
- type: mrr_at_100
value: 69.8134
- type: mrr_at_1000
value: 69.8257
- type: nauc_ndcg_at_1_max
value: 49.3608
- type: nauc_ndcg_at_1_std
value: 12.742400000000002
- type: nauc_ndcg_at_1_diff1
value: 74.5012
- type: nauc_ndcg_at_3_max
value: 49.524499999999996
- type: nauc_ndcg_at_3_std
value: 7.7241
- type: nauc_ndcg_at_3_diff1
value: 72.0127
- type: nauc_ndcg_at_5_max
value: 51.897099999999995
- type: nauc_ndcg_at_5_std
value: 12.8641
- type: nauc_ndcg_at_5_diff1
value: 69.7789
- type: nauc_ndcg_at_10_max
value: 55.1141
- type: nauc_ndcg_at_10_std
value: 17.136499999999998
- type: nauc_ndcg_at_10_diff1
value: 68.8711
- type: nauc_ndcg_at_20_max
value: 54.74719999999999
- type: nauc_ndcg_at_20_std
value: 17.0485
- type: nauc_ndcg_at_20_diff1
value: 69.4701
- type: nauc_ndcg_at_100_max
value: 53.7619
- type: nauc_ndcg_at_100_std
value: 15.335299999999998
- type: nauc_ndcg_at_100_diff1
value: 70.34479999999999
- type: nauc_ndcg_at_1000_max
value: 53.4516
- type: nauc_ndcg_at_1000_std
value: 14.7843
- type: nauc_ndcg_at_1000_diff1
value: 70.6041
- type: nauc_map_at_1_max
value: 44.9654
- type: nauc_map_at_1_std
value: 5.9821
- type: nauc_map_at_1_diff1
value: 76.2581
- type: nauc_map_at_3_max
value: 47.515299999999996
- type: nauc_map_at_3_std
value: 6.2703
- type: nauc_map_at_3_diff1
value: 73.5279
- type: nauc_map_at_5_max
value: 49.805899999999994
- type: nauc_map_at_5_std
value: 10.1001
- type: nauc_map_at_5_diff1
value: 72.1812
- type: nauc_map_at_10_max
value: 51.9276
- type: nauc_map_at_10_std
value: 12.698200000000002
- type: nauc_map_at_10_diff1
value: 71.6343
- type: nauc_map_at_20_max
value: 51.8856
- type: nauc_map_at_20_std
value: 12.814800000000002
- type: nauc_map_at_20_diff1
value: 71.78179999999999
- type: nauc_map_at_100_max
value: 51.7504
- type: nauc_map_at_100_std
value: 12.5353
- type: nauc_map_at_100_diff1
value: 71.8854
- type: nauc_map_at_1000_max
value: 51.739900000000006
- type: nauc_map_at_1000_std
value: 12.519
- type: nauc_map_at_1000_diff1
value: 71.8964
- type: nauc_recall_at_1_max
value: 44.9654
- type: nauc_recall_at_1_std
value: 5.9821
- type: nauc_recall_at_1_diff1
value: 76.2581
- type: nauc_recall_at_3_max
value: 47.9306
- type: nauc_recall_at_3_std
value: 3.5374000000000003
- type: nauc_recall_at_3_diff1
value: 68.4552
- type: nauc_recall_at_5_max
value: 54.374
- type: nauc_recall_at_5_std
value: 17.646700000000003
- type: nauc_recall_at_5_diff1
value: 60.5644
- type: nauc_recall_at_10_max
value: 69.6484
- type: nauc_recall_at_10_std
value: 38.3671
- type: nauc_recall_at_10_diff1
value: 54.39580000000001
- type: nauc_recall_at_20_max
value: 70.0061
- type: nauc_recall_at_20_std
value: 42.403999999999996
- type: nauc_recall_at_20_diff1
value: 55.3831
- type: nauc_recall_at_100_max
value: 69.02629999999999
- type: nauc_recall_at_100_std
value: 43.850699999999996
- type: nauc_recall_at_100_diff1
value: 57.837
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 49.3608
- type: nauc_precision_at_1_std
value: 12.742400000000002
- type: nauc_precision_at_1_diff1
value: 74.5012
- type: nauc_precision_at_3_max
value: 45.2627
- type: nauc_precision_at_3_std
value: 15.5113
- type: nauc_precision_at_3_diff1
value: 44.5108
- type: nauc_precision_at_5_max
value: 48.4003
- type: nauc_precision_at_5_std
value: 35.3791
- type: nauc_precision_at_5_diff1
value: 19.7518
- type: nauc_precision_at_10_max
value: 46.688
- type: nauc_precision_at_10_std
value: 47.9876
- type: nauc_precision_at_10_diff1
value: 0.1083
- type: nauc_precision_at_20_max
value: 41.281400000000005
- type: nauc_precision_at_20_std
value: 49.0662
- type: nauc_precision_at_20_diff1
value: -6.2035
- type: nauc_precision_at_100_max
value: 30.0167
- type: nauc_precision_at_100_std
value: 47.2561
- type: nauc_precision_at_100_diff1
value: -22.8584
- type: nauc_precision_at_1000_max
value: 23.724999999999998
- type: nauc_precision_at_1000_std
value: 45.342
- type: nauc_precision_at_1000_diff1
value: -33.29
- type: nauc_mrr_at_1_max
value: 49.3608
- type: nauc_mrr_at_1_std
value: 12.742400000000002
- type: nauc_mrr_at_1_diff1
value: 74.5012
- type: nauc_mrr_at_3_max
value: 51.1718
- type: nauc_mrr_at_3_std
value: 11.739700000000001
- type: nauc_mrr_at_3_diff1
value: 71.5992
- type: nauc_mrr_at_5_max
value: 52.2421
- type: nauc_mrr_at_5_std
value: 14.127
- type: nauc_mrr_at_5_diff1
value: 70.57
- type: nauc_mrr_at_10_max
value: 52.5587
- type: nauc_mrr_at_10_std
value: 14.5207
- type: nauc_mrr_at_10_diff1
value: 70.55709999999999
- type: nauc_mrr_at_20_max
value: 52.3699
- type: nauc_mrr_at_20_std
value: 14.310300000000002
- type: nauc_mrr_at_20_diff1
value: 70.6993
- type: nauc_mrr_at_100_max
value: 52.2734
- type: nauc_mrr_at_100_std
value: 14.0848
- type: nauc_mrr_at_100_diff1
value: 70.8146
- type: nauc_mrr_at_1000_max
value: 52.2622
- type: nauc_mrr_at_1000_std
value: 14.0715
- type: nauc_mrr_at_1000_diff1
value: 70.8239
- type: main_score
value: 73.476
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions (default)
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: similarity_accuracy
value: 99.87819999999999
- type: similarity_accuracy_threshold
value: 74.8
- type: similarity_f1
value: 93.79729999999999
- type: similarity_f1_threshold
value: 74.6812
- type: similarity_precision
value: 94.6083
- type: similarity_recall
value: 93.0
- type: similarity_ap
value: 97.1971
- type: cosine_accuracy
value: 99.87819999999999
- type: cosine_accuracy_threshold
value: 74.8
- type: cosine_f1
value: 93.79729999999999
- type: cosine_f1_threshold
value: 74.6812
- type: cosine_precision
value: 94.6083
- type: cosine_recall
value: 93.0
- type: cosine_ap
value: 97.1971
- type: manhattan_accuracy
value: 99.8792
- type: manhattan_accuracy_threshold
value: 47567.8925
- type: manhattan_f1
value: 93.8508
- type: manhattan_f1_threshold
value: 47567.8925
- type: manhattan_precision
value: 94.6138
- type: manhattan_recall
value: 93.10000000000001
- type: manhattan_ap
value: 97.2177
- type: euclidean_accuracy
value: 99.8812
- type: euclidean_accuracy_threshold
value: 2164.0619
- type: euclidean_f1
value: 93.9759
- type: euclidean_f1_threshold
value: 2164.0619
- type: euclidean_precision
value: 94.35480000000001
- type: euclidean_recall
value: 93.60000000000001
- type: euclidean_ap
value: 97.2412
- type: dot_accuracy
value: 99.8446
- type: dot_accuracy_threshold
value: 68470.2454
- type: dot_f1
value: 91.9939
- type: dot_f1_threshold
value: 68470.2454
- type: dot_precision
value: 93.8606
- type: dot_recall
value: 90.2
- type: dot_ap
value: 96.36829999999999
- type: max_accuracy
value: 99.8812
- type: max_f1
value: 93.9759
- type: max_precision
value: 94.6138
- type: max_recall
value: 93.60000000000001
- type: max_ap
value: 97.2412
- type: main_score
value: 97.2412
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering (default)
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 70.04010000000001
- type: v_measure_std
value: 3.9558999999999997
- type: main_score
value: 70.04010000000001
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P (default)
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 42.4207
- type: v_measure_std
value: 1.3677
- type: main_score
value: 42.4207
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions (default)
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 52.7026
- type: mrr
value: 53.5668
- type: nAUC_map_max
value: 12.1758
- type: nAUC_map_std
value: 6.7148
- type: nAUC_map_diff1
value: 39.881499999999996
- type: nAUC_mrr_max
value: 13.0771
- type: nAUC_mrr_std
value: 7.7001
- type: nAUC_mrr_diff1
value: 39.6391
- type: main_score
value: 52.7026
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: pearson
value: 31.346400000000003
- type: spearman
value: 31.5967
- type: cosine_spearman
value: 31.5967
- type: cosine_pearson
value: 31.346400000000003
- type: dot_spearman
value: 28.5388
- type: dot_pearson
value: 31.005300000000002
- type: main_score
value: 31.5967
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID (default)
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: ndcg_at_1
value: 87.0
- type: ndcg_at_3
value: 84.693
- type: ndcg_at_5
value: 82.211
- type: ndcg_at_10
value: 80.55
- type: ndcg_at_20
value: 77.766
- type: ndcg_at_100
value: 62.881
- type: ndcg_at_1000
value: 56.510000000000005
- type: map_at_1
value: 0.251
- type: map_at_3
value: 0.7000000000000001
- type: map_at_5
value: 1.124
- type: map_at_10
value: 2.114
- type: map_at_20
value: 3.837
- type: map_at_100
value: 12.903999999999998
- type: map_at_1000
value: 31.184
- type: recall_at_1
value: 0.251
- type: recall_at_3
value: 0.72
- type: recall_at_5
value: 1.179
- type: recall_at_10
value: 2.271
- type: recall_at_20
value: 4.242
- type: recall_at_100
value: 16.012999999999998
- type: recall_at_1000
value: 53.556000000000004
- type: precision_at_1
value: 92.0
- type: precision_at_3
value: 88.667
- type: precision_at_5
value: 86.8
- type: precision_at_10
value: 85.8
- type: precision_at_20
value: 82.39999999999999
- type: precision_at_100
value: 64.8
- type: precision_at_1000
value: 24.832
- type: mrr_at_1
value: 92.0
- type: mrr_at_3
value: 95.0
- type: mrr_at_5
value: 95.0
- type: mrr_at_10
value: 95.0
- type: mrr_at_20
value: 95.0
- type: mrr_at_100
value: 95.0
- type: mrr_at_1000
value: 95.0
- type: nauc_ndcg_at_1_max
value: 73.7596
- type: nauc_ndcg_at_1_std
value: 52.21130000000001
- type: nauc_ndcg_at_1_diff1
value: -8.4225
- type: nauc_ndcg_at_3_max
value: 68.513
- type: nauc_ndcg_at_3_std
value: 61.9698
- type: nauc_ndcg_at_3_diff1
value: -13.079099999999999
- type: nauc_ndcg_at_5_max
value: 60.7482
- type: nauc_ndcg_at_5_std
value: 66.56830000000001
- type: nauc_ndcg_at_5_diff1
value: -12.947500000000002
- type: nauc_ndcg_at_10_max
value: 57.4673
- type: nauc_ndcg_at_10_std
value: 65.25999999999999
- type: nauc_ndcg_at_10_diff1
value: -14.4235
- type: nauc_ndcg_at_20_max
value: 61.1214
- type: nauc_ndcg_at_20_std
value: 73.60640000000001
- type: nauc_ndcg_at_20_diff1
value: -18.1836
- type: nauc_ndcg_at_100_max
value: 55.3917
- type: nauc_ndcg_at_100_std
value: 80.9228
- type: nauc_ndcg_at_100_diff1
value: -13.6584
- type: nauc_ndcg_at_1000_max
value: 61.6035
- type: nauc_ndcg_at_1000_std
value: 77.73299999999999
- type: nauc_ndcg_at_1000_diff1
value: 9.456199999999999
- type: nauc_map_at_1_max
value: 3.0159
- type: nauc_map_at_1_std
value: -6.6826
- type: nauc_map_at_1_diff1
value: 19.3295
- type: nauc_map_at_3_max
value: 11.3326
- type: nauc_map_at_3_std
value: 0.2297
- type: nauc_map_at_3_diff1
value: 18.4889
- type: nauc_map_at_5_max
value: 12.8623
- type: nauc_map_at_5_std
value: 3.1086
- type: nauc_map_at_5_diff1
value: 15.2538
- type: nauc_map_at_10_max
value: 15.9145
- type: nauc_map_at_10_std
value: 5.8626
- type: nauc_map_at_10_diff1
value: 11.5455
- type: nauc_map_at_20_max
value: 24.6148
- type: nauc_map_at_20_std
value: 17.161199999999997
- type: nauc_map_at_20_diff1
value: 7.6256
- type: nauc_map_at_100_max
value: 42.070299999999996
- type: nauc_map_at_100_std
value: 48.926700000000004
- type: nauc_map_at_100_diff1
value: 0.16
- type: nauc_map_at_1000_max
value: 63.9887
- type: nauc_map_at_1000_std
value: 81.2657
- type: nauc_map_at_1000_diff1
value: 4.1088
- type: nauc_recall_at_1_max
value: 3.0159
- type: nauc_recall_at_1_std
value: -6.6826
- type: nauc_recall_at_1_diff1
value: 19.3295
- type: nauc_recall_at_3_max
value: 7.7778
- type: nauc_recall_at_3_std
value: -3.3724
- type: nauc_recall_at_3_diff1
value: 17.9181
- type: nauc_recall_at_5_max
value: 6.716900000000001
- type: nauc_recall_at_5_std
value: -2.6891000000000003
- type: nauc_recall_at_5_diff1
value: 16.3817
- type: nauc_recall_at_10_max
value: 7.7518
- type: nauc_recall_at_10_std
value: -1.9855
- type: nauc_recall_at_10_diff1
value: 13.4496
- type: nauc_recall_at_20_max
value: 14.4895
- type: nauc_recall_at_20_std
value: 7.2935
- type: nauc_recall_at_20_diff1
value: 11.2986
- type: nauc_recall_at_100_max
value: 29.8636
- type: nauc_recall_at_100_std
value: 33.5546
- type: nauc_recall_at_100_diff1
value: 7.0793
- type: nauc_recall_at_1000_max
value: 57.184000000000005
- type: nauc_recall_at_1000_std
value: 65.3208
- type: nauc_recall_at_1000_diff1
value: 15.7381
- type: nauc_precision_at_1_max
value: 93.4641
- type: nauc_precision_at_1_std
value: 80.6839
- type: nauc_precision_at_1_diff1
value: 21.592
- type: nauc_precision_at_3_max
value: 87.6596
- type: nauc_precision_at_3_std
value: 71.28370000000001
- type: nauc_precision_at_3_diff1
value: -0.5263
- type: nauc_precision_at_5_max
value: 69.3194
- type: nauc_precision_at_5_std
value: 67.4507
- type: nauc_precision_at_5_diff1
value: 5.8362
- type: nauc_precision_at_10_max
value: 62.393299999999996
- type: nauc_precision_at_10_std
value: 62.443599999999996
- type: nauc_precision_at_10_diff1
value: -5.3395
- type: nauc_precision_at_20_max
value: 63.4842
- type: nauc_precision_at_20_std
value: 68.95599999999999
- type: nauc_precision_at_20_diff1
value: -13.494100000000001
- type: nauc_precision_at_100_max
value: 59.24549999999999
- type: nauc_precision_at_100_std
value: 81.3779
- type: nauc_precision_at_100_diff1
value: -11.0792
- type: nauc_precision_at_1000_max
value: 44.8354
- type: nauc_precision_at_1000_std
value: 55.232099999999996
- type: nauc_precision_at_1000_diff1
value: -1.4931
- type: nauc_mrr_at_1_max
value: 93.4641
- type: nauc_mrr_at_1_std
value: 80.6839
- type: nauc_mrr_at_1_diff1
value: 21.592
- type: nauc_mrr_at_3_max
value: 93.8998
- type: nauc_mrr_at_3_std
value: 79.3962
- type: nauc_mrr_at_3_diff1
value: 19.3371
- type: nauc_mrr_at_5_max
value: 93.8998
- type: nauc_mrr_at_5_std
value: 79.3962
- type: nauc_mrr_at_5_diff1
value: 19.3371
- type: nauc_mrr_at_10_max
value: 93.8998
- type: nauc_mrr_at_10_std
value: 79.3962
- type: nauc_mrr_at_10_diff1
value: 19.3371
- type: nauc_mrr_at_20_max
value: 93.8998
- type: nauc_mrr_at_20_std
value: 79.3962
- type: nauc_mrr_at_20_diff1
value: 19.3371
- type: nauc_mrr_at_100_max
value: 93.8998
- type: nauc_mrr_at_100_std
value: 79.3962
- type: nauc_mrr_at_100_diff1
value: 19.3371
- type: nauc_mrr_at_1000_max
value: 93.8998
- type: nauc_mrr_at_1000_std
value: 79.3962
- type: nauc_mrr_at_1000_diff1
value: 19.3371
- type: main_score
value: 80.55
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020 (default)
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: ndcg_at_1
value: 18.367
- type: ndcg_at_3
value: 23.105999999999998
- type: ndcg_at_5
value: 22.423000000000002
- type: ndcg_at_10
value: 21.83
- type: ndcg_at_20
value: 23.534
- type: ndcg_at_100
value: 33.332
- type: ndcg_at_1000
value: 44.842999999999996
- type: map_at_1
value: 1.52
- type: map_at_3
value: 3.811
- type: map_at_5
value: 5.4879999999999995
- type: map_at_10
value: 8.204
- type: map_at_20
value: 10.387
- type: map_at_100
value: 13.633000000000001
- type: map_at_1000
value: 15.156
- type: recall_at_1
value: 1.52
- type: recall_at_3
value: 5.466
- type: recall_at_5
value: 8.927
- type: recall_at_10
value: 15.237
- type: recall_at_20
value: 22.841
- type: recall_at_100
value: 44.586999999999996
- type: recall_at_1000
value: 79.199
- type: precision_at_1
value: 20.408
- type: precision_at_3
value: 25.169999999999998
- type: precision_at_5
value: 23.673
- type: precision_at_10
value: 20.408
- type: precision_at_20
value: 16.531000000000002
- type: precision_at_100
value: 7.204000000000001
- type: precision_at_1000
value: 1.473
- type: mrr_at_1
value: 20.4082
- type: mrr_at_3
value: 35.374100000000006
- type: mrr_at_5
value: 37.7211
- type: mrr_at_10
value: 39.7068
- type: mrr_at_20
value: 40.6272
- type: mrr_at_100
value: 40.7905
- type: mrr_at_1000
value: 40.805
- type: nauc_ndcg_at_1_max
value: -25.3799
- type: nauc_ndcg_at_1_std
value: -27.8526
- type: nauc_ndcg_at_1_diff1
value: 11.5616
- type: nauc_ndcg_at_3_max
value: -31.987900000000003
- type: nauc_ndcg_at_3_std
value: -18.1926
- type: nauc_ndcg_at_3_diff1
value: 15.4188
- type: nauc_ndcg_at_5_max
value: -29.2499
- type: nauc_ndcg_at_5_std
value: -18.8992
- type: nauc_ndcg_at_5_diff1
value: 9.677
- type: nauc_ndcg_at_10_max
value: -25.427899999999998
- type: nauc_ndcg_at_10_std
value: -19.0155
- type: nauc_ndcg_at_10_diff1
value: 1.5350000000000001
- type: nauc_ndcg_at_20_max
value: -25.007800000000003
- type: nauc_ndcg_at_20_std
value: -6.626899999999999
- type: nauc_ndcg_at_20_diff1
value: -2.0142
- type: nauc_ndcg_at_100_max
value: -24.7187
- type: nauc_ndcg_at_100_std
value: 18.587899999999998
- type: nauc_ndcg_at_100_diff1
value: -7.925599999999999
- type: nauc_ndcg_at_1000_max
value: -20.9609
- type: nauc_ndcg_at_1000_std
value: 27.360400000000002
- type: nauc_ndcg_at_1000_diff1
value: -5.3411
- type: nauc_map_at_1_max
value: -26.3166
- type: nauc_map_at_1_std
value: -27.701900000000002
- type: nauc_map_at_1_diff1
value: 14.4953
- type: nauc_map_at_3_max
value: -19.4984
- type: nauc_map_at_3_std
value: -26.0187
- type: nauc_map_at_3_diff1
value: 18.9316
- type: nauc_map_at_5_max
value: -17.6688
- type: nauc_map_at_5_std
value: -27.4662
- type: nauc_map_at_5_diff1
value: 16.3786
- type: nauc_map_at_10_max
value: -9.727
- type: nauc_map_at_10_std
value: -25.4592
- type: nauc_map_at_10_diff1
value: 8.434999999999999
- type: nauc_map_at_20_max
value: -14.2879
- type: nauc_map_at_20_std
value: -17.5881
- type: nauc_map_at_20_diff1
value: 2.4941
- type: nauc_map_at_100_max
value: -15.804499999999999
- type: nauc_map_at_100_std
value: -2.6222
- type: nauc_map_at_100_diff1
value: -4.3869
- type: nauc_map_at_1000_max
value: -15.4637
- type: nauc_map_at_1000_std
value: 1.8402000000000003
- type: nauc_map_at_1000_diff1
value: -5.3595
- type: nauc_recall_at_1_max
value: -26.3166
- type: nauc_recall_at_1_std
value: -27.701900000000002
- type: nauc_recall_at_1_diff1
value: 14.4953
- type: nauc_recall_at_3_max
value: -18.4525
- type: nauc_recall_at_3_std
value: -22.7019
- type: nauc_recall_at_3_diff1
value: 14.5105
- type: nauc_recall_at_5_max
value: -16.8608
- type: nauc_recall_at_5_std
value: -26.2799
- type: nauc_recall_at_5_diff1
value: 6.910299999999999
- type: nauc_recall_at_10_max
value: -11.498700000000001
- type: nauc_recall_at_10_std
value: -22.290499999999998
- type: nauc_recall_at_10_diff1
value: -1.6997000000000002
- type: nauc_recall_at_20_max
value: -16.319
- type: nauc_recall_at_20_std
value: -2.6968
- type: nauc_recall_at_20_diff1
value: -8.5511
- type: nauc_recall_at_100_max
value: -17.741
- type: nauc_recall_at_100_std
value: 36.1914
- type: nauc_recall_at_100_diff1
value: -20.1127
- type: nauc_recall_at_1000_max
value: 3.4278999999999997
- type: nauc_recall_at_1000_std
value: 65.7558
- type: nauc_recall_at_1000_diff1
value: -15.537899999999999
- type: nauc_precision_at_1_max
value: -27.3245
- type: nauc_precision_at_1_std
value: -28.615000000000002
- type: nauc_precision_at_1_diff1
value: 16.2275
- type: nauc_precision_at_3_max
value: -32.1286
- type: nauc_precision_at_3_std
value: -14.0653
- type: nauc_precision_at_3_diff1
value: 15.6075
- type: nauc_precision_at_5_max
value: -27.176299999999998
- type: nauc_precision_at_5_std
value: -15.5885
- type: nauc_precision_at_5_diff1
value: 7.3431999999999995
- type: nauc_precision_at_10_max
value: -26.9241
- type: nauc_precision_at_10_std
value: -11.737
- type: nauc_precision_at_10_diff1
value: -7.630000000000001
- type: nauc_precision_at_20_max
value: -26.901999999999997
- type: nauc_precision_at_20_std
value: 23.7519
- type: nauc_precision_at_20_diff1
value: -21.343799999999998
- type: nauc_precision_at_100_max
value: -16.9757
- type: nauc_precision_at_100_std
value: 70.6663
- type: nauc_precision_at_100_diff1
value: -32.3231
- type: nauc_precision_at_1000_max
value: 20.8431
- type: nauc_precision_at_1000_std
value: 37.8016
- type: nauc_precision_at_1000_diff1
value: -9.911200000000001
- type: nauc_mrr_at_1_max
value: -27.3245
- type: nauc_mrr_at_1_std
value: -28.615000000000002
- type: nauc_mrr_at_1_diff1
value: 16.2275
- type: nauc_mrr_at_3_max
value: -33.332499999999996
- type: nauc_mrr_at_3_std
value: -21.543499999999998
- type: nauc_mrr_at_3_diff1
value: 15.7577
- type: nauc_mrr_at_5_max
value: -34.56
- type: nauc_mrr_at_5_std
value: -21.0279
- type: nauc_mrr_at_5_diff1
value: 10.4699
- type: nauc_mrr_at_10_max
value: -35.4396
- type: nauc_mrr_at_10_std
value: -22.6385
- type: nauc_mrr_at_10_diff1
value: 8.4536
- type: nauc_mrr_at_20_max
value: -34.0343
- type: nauc_mrr_at_20_std
value: -21.4022
- type: nauc_mrr_at_20_diff1
value: 10.7134
- type: nauc_mrr_at_100_max
value: -34.190799999999996
- type: nauc_mrr_at_100_std
value: -21.5996
- type: nauc_mrr_at_100_diff1
value: 10.9828
- type: nauc_mrr_at_1000_max
value: -34.1503
- type: nauc_mrr_at_1000_std
value: -21.662300000000002
- type: nauc_mrr_at_1000_diff1
value: 10.96
- type: main_score
value: 21.83
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification (default)
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 81.4014
- type: f1
value: 64.3103
- type: f1_weighted
value: 85.0047
- type: ap
value: 22.2804
- type: ap_weighted
value: 22.2804
- type: main_score
value: 81.4014
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification (default)
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 66.4403
- type: f1
value: 66.8774
- type: f1_weighted
value: 65.9999
- type: main_score
value: 66.4403
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering (default)
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 53.3153
- type: v_measure_std
value: 1.2923
- type: main_score
value: 53.3153
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015 (default)
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: similarity_accuracy
value: 85.22380000000001
- type: similarity_accuracy_threshold
value: 74.7432
- type: similarity_f1
value: 66.2828
- type: similarity_f1_threshold
value: 69.9472
- type: similarity_precision
value: 60.765299999999996
- type: similarity_recall
value: 72.9024
- type: similarity_ap
value: 72.0492
- type: cosine_accuracy
value: 85.22380000000001
- type: cosine_accuracy_threshold
value: 74.7432
- type: cosine_f1
value: 66.2828
- type: cosine_f1_threshold
value: 69.9472
- type: cosine_precision
value: 60.765299999999996
- type: cosine_recall
value: 72.9024
- type: cosine_ap
value: 72.0492
- type: manhattan_accuracy
value: 85.10459999999999
- type: manhattan_accuracy_threshold
value: 48810.3699
- type: manhattan_f1
value: 65.7133
- type: manhattan_f1_threshold
value: 53724.462900000006
- type: manhattan_precision
value: 60.3399
- type: manhattan_recall
value: 72.1372
- type: manhattan_ap
value: 71.3681
- type: euclidean_accuracy
value: 85.1404
- type: euclidean_accuracy_threshold
value: 2203.8609
- type: euclidean_f1
value: 65.8107
- type: euclidean_f1_threshold
value: 2445.96
- type: euclidean_precision
value: 59.8875
- type: euclidean_recall
value: 73.0343
- type: euclidean_ap
value: 71.3938
- type: dot_accuracy
value: 84.8781
- type: dot_accuracy_threshold
value: 74077.38040000001
- type: dot_f1
value: 65.3706
- type: dot_f1_threshold
value: 69501.5808
- type: dot_precision
value: 60.58559999999999
- type: dot_recall
value: 70.97630000000001
- type: dot_ap
value: 71.0091
- type: max_accuracy
value: 85.22380000000001
- type: max_f1
value: 66.2828
- type: max_precision
value: 60.765299999999996
- type: max_recall
value: 73.0343
- type: max_ap
value: 72.0492
- type: main_score
value: 72.0492
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus (default)
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: similarity_accuracy
value: 89.145
- type: similarity_accuracy_threshold
value: 65.00280000000001
- type: similarity_f1
value: 78.78150000000001
- type: similarity_f1_threshold
value: 61.2185
- type: similarity_precision
value: 75.0279
- type: similarity_recall
value: 82.9304
- type: similarity_ap
value: 86.39949999999999
- type: cosine_accuracy
value: 89.145
- type: cosine_accuracy_threshold
value: 65.00280000000001
- type: cosine_f1
value: 78.78150000000001
- type: cosine_f1_threshold
value: 61.2185
- type: cosine_precision
value: 75.0279
- type: cosine_recall
value: 82.9304
- type: cosine_ap
value: 86.39949999999999
- type: manhattan_accuracy
value: 89.05579999999999
- type: manhattan_accuracy_threshold
value: 55381.189
- type: manhattan_f1
value: 78.6152
- type: manhattan_f1_threshold
value: 58447.6685
- type: manhattan_precision
value: 74.77080000000001
- type: manhattan_recall
value: 82.8765
- type: manhattan_ap
value: 86.2899
- type: euclidean_accuracy
value: 89.1179
- type: euclidean_accuracy_threshold
value: 2552.2853999999998
- type: euclidean_f1
value: 78.6816
- type: euclidean_f1_threshold
value: 2660.0677
- type: euclidean_precision
value: 74.4317
- type: euclidean_recall
value: 83.4463
- type: euclidean_ap
value: 86.3158
- type: dot_accuracy
value: 88.81710000000001
- type: dot_accuracy_threshold
value: 58383.1421
- type: dot_f1
value: 78.2367
- type: dot_f1_threshold
value: 54826.550299999995
- type: dot_precision
value: 73.7657
- type: dot_recall
value: 83.2846
- type: dot_ap
value: 85.5699
- type: max_accuracy
value: 89.145
- type: max_f1
value: 78.78150000000001
- type: max_precision
value: 75.0279
- type: max_recall
value: 83.4463
- type: max_ap
value: 86.39949999999999
- type: main_score
value: 86.39949999999999
task:
type: PairClassification
---
# cde-small-v2
> [!NOTE]
> **Note on parameter count:** Although HuggingFace reports the size of this model as 281M params, really it can be thought of as 140M. That's because our weights actually contain the weights of two models (dubbed "first stage" and "second stage"), and only the second-stage model is used to compute embeddings at search time.
<a href="https://github.com/jxmorris12/cde">Github</a>
Our new model naturally integrates "context tokens" into the embedding process. As of January 13th, 2025, `cde-small-v2` is the best small model (under 400M params) on the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard) for text embedding models, with an average score of 65.58.
👉 <b><a href="https://colab.research.google.com/drive/1r8xwbp7_ySL9lP-ve4XMJAHjidB9UkbL?usp=sharing">Try on Colab</a></b>
<br>
👉 <b><a href="https://arxiv.org/abs/2410.02525">Contextual Document Embeddings (ArXiv)</a></b>

<br>
<hr>
# How to use `cde-small-v2`
Our embedding model needs to be used in *two stages*. The first stage is to gather some dataset information by embedding a subset of the corpus using our "first-stage" model. The second stage is to actually embed queries and documents, conditioning on the corpus information from the first stage. Note that we can do the first stage part offline and only use the second-stage weights at inference time.
## With Transformers
<details>
<summary>Click to learn how to use cde-small-v2 with Transformers</summary>
### Loading the model
Our model can be loaded using `transformers` out-of-the-box with "trust remote code" enabled. We use the ModernBERT base tokenizer:
```python
import transformers
model = transformers.AutoModel.from_pretrained("jxm/cde-small-v2", trust_remote_code=True)
tokenizer = transformers.AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
```
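If you want to sanity-check the parameter-count note above, you can count each stage's parameters separately (a quick sketch; `first_stage_model` and `second_stage_model` are the attribute names used in the code further down this card):
```python
# The checkpoint holds two stages, but only the second stage is used to embed at search time.
def count_params(module):
    return sum(p.numel() for p in module.parameters())

print("first stage: ", count_params(model.first_stage_model))
print("second stage:", count_params(model.second_stage_model))
```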
#### Note on prefixes
*Nota bene*: Like all state-of-the-art embedding models, our model was trained with task-specific prefixes. To do retrieval, you can prepend the following strings to queries & documents:
```python
query_prefix = "search_query: "
document_prefix = "search_document: "
```
### First stage
```python
minicorpus_size = model.config.transductive_corpus_size
minicorpus_docs = [ ... ] # Put some strings here that are representative of your corpus, for example by calling random.sample(corpus, k=minicorpus_size)
assert len(minicorpus_docs) == minicorpus_size # You must use exactly this many documents in the minicorpus. You can oversample if your corpus is smaller.
minicorpus_docs = tokenizer(
[document_prefix + doc for doc in minicorpus_docs],
truncation=True,
padding=True,
max_length=512,
return_tensors="pt"
).to(model.device)
import torch
from tqdm.autonotebook import tqdm
batch_size = 32
dataset_embeddings = []
for i in tqdm(range(0, len(minicorpus_docs["input_ids"]), batch_size)):
minicorpus_docs_batch = {k: v[i:i+batch_size] for k,v in minicorpus_docs.items()}
with torch.no_grad():
dataset_embeddings.append(
model.first_stage_model(**minicorpus_docs_batch)
)
dataset_embeddings = torch.cat(dataset_embeddings)
```
### Running the second stage
Now that we have obtained "dataset embeddings" we can embed documents and queries like normal. Remember to use the document prefix for documents:
```python
docs = tokenizer(
[document_prefix + doc for doc in docs],
truncation=True,
padding=True,
max_length=512,
return_tensors="pt"
).to(model.device)
with torch.no_grad():
doc_embeddings = model.second_stage_model(
input_ids=docs["input_ids"],
attention_mask=docs["attention_mask"],
dataset_embeddings=dataset_embeddings,
)
doc_embeddings /= doc_embeddings.norm(p=2, dim=1, keepdim=True)
```
and the query prefix for queries:
```python
queries = queries.select(range(16))["text"]  # assumes `queries` is a Hugging Face datasets.Dataset; substitute your own list of query strings
queries = tokenizer(
[query_prefix + query for query in queries],
truncation=True,
padding=True,
max_length=512,
return_tensors="pt"
).to(model.device)
with torch.no_grad():
query_embeddings = model.second_stage_model(
input_ids=queries["input_ids"],
attention_mask=queries["attention_mask"],
dataset_embeddings=dataset_embeddings,
)
query_embeddings /= query_embeddings.norm(p=2, dim=1, keepdim=True)
```
These embeddings can be compared using a dot product, since they are normalized.
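For example (a small sketch, assuming at least five documents were embedded above):
```python
# Dot-product scores between every query and every document; the embeddings are already normalized.
scores = query_embeddings @ doc_embeddings.T        # shape: (num_queries, num_docs)
top_scores, top_indices = scores.topk(k=5, dim=1)   # top-5 documents per query
```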
</details>
### What if I don't know what my corpus will be ahead of time?
If you can't obtain corpus information ahead of time, you still have to pass *something* as the dataset embeddings. Our model will still work in this case, just not quite as well: without corpus information, performance drops from 65.0 to 63.8 on MTEB. We provide [some random strings](https://huggingface.co/jxm/cde-small-v2/resolve/main/random_strings.txt) that worked well for us and can be used as a substitute for corpus sampling.
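One way to use them (a hypothetical sketch; the download-and-sample step is an assumption, not part of the card's instructions):
```python
# Hypothetical sketch: use the provided random strings in place of a real corpus sample,
# then embed them exactly as in the "First stage" code above.
import random
import requests

url = "https://huggingface.co/jxm/cde-small-v2/resolve/main/random_strings.txt"
random_strings = requests.get(url).text.splitlines()
minicorpus_docs = random.sample(random_strings, k=minicorpus_size)  # assumes the file has >= minicorpus_size lines
```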
## With Sentence Transformers
<details open="">
<summary>Click to learn how to use cde-small-v2 with Sentence Transformers</summary>
### Loading the model
Our model can be loaded using `sentence-transformers` out-of-the-box with "trust remote code" enabled:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("jxm/cde-small-v2", trust_remote_code=True)
```
#### Note on prefixes
*Nota bene*: Like all state-of-the-art embedding models, our model was trained with task-specific prefixes. To do retrieval, you can use `prompt_name="query"` and `prompt_name="document"` in the `encode` method of the model when embedding queries and documents, respectively.
### First stage
```python
minicorpus_size = model[0].config.transductive_corpus_size
minicorpus_docs = [ ... ] # Put some strings here that are representative of your corpus, for example by calling random.sample(corpus, k=minicorpus_size)
assert len(minicorpus_docs) == minicorpus_size # You must use exactly this many documents in the minicorpus. You can oversample if your corpus is smaller.
dataset_embeddings = model.encode(
minicorpus_docs,
prompt_name="document",
convert_to_tensor=True
)
```
### Running the second stage
Now that we have obtained "dataset embeddings" we can embed documents and queries like normal. Remember to use the document prompt for documents:
```python
docs = [...]
queries = [...]
doc_embeddings = model.encode(
docs,
prompt_name="document",
dataset_embeddings=dataset_embeddings,
convert_to_tensor=True,
)
query_embeddings = model.encode(
queries,
prompt_name="query",
dataset_embeddings=dataset_embeddings,
convert_to_tensor=True,
)
```
These embeddings can be compared using cosine similarity via `model.similarity`:
```python
similarities = model.similarity(query_embeddings, doc_embeddings)
topk_values, topk_indices = similarities.topk(5)
```
<details>
<summary>Click here for a full copy-paste ready example</summary>
```python
from sentence_transformers import SentenceTransformer
from datasets import load_dataset
# 1. Load the Sentence Transformer model
model = SentenceTransformer("jxm/cde-small-v2", trust_remote_code=True)
context_docs_size = model[0].config.transductive_corpus_size # 512
# 2. Load the dataset: context dataset, docs, and queries
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset.shuffle(seed=42)
# 10 queries, 512 context docs, 500 docs
queries = dataset["query"][:10]
docs = dataset["answer"][:2000]
context_docs = dataset["answer"][-context_docs_size:] # Last 512 docs
# 3. First stage: embed the context docs
dataset_embeddings = model.encode(
context_docs,
prompt_name="document",
convert_to_tensor=True,
)
# 4. Second stage: embed the docs and queries
doc_embeddings = model.encode(
docs,
prompt_name="document",
dataset_embeddings=dataset_embeddings,
convert_to_tensor=True,
)
query_embeddings = model.encode(
queries,
prompt_name="query",
dataset_embeddings=dataset_embeddings,
convert_to_tensor=True,
)
# 5. Compute the similarity between the queries and docs
similarities = model.similarity(query_embeddings, doc_embeddings)
topk_values, topk_indices = similarities.topk(5)
print(topk_values)
print(topk_indices)
"""
tensor([[0.5495, 0.5426, 0.5423, 0.5292, 0.5286],
[0.6357, 0.6334, 0.6177, 0.5862, 0.5794],
[0.7648, 0.5452, 0.5000, 0.4959, 0.4881],
[0.6802, 0.5225, 0.5178, 0.5160, 0.5075],
[0.6947, 0.5843, 0.5619, 0.5344, 0.5298],
[0.7742, 0.7742, 0.7742, 0.7231, 0.6224],
[0.8853, 0.6667, 0.5829, 0.5795, 0.5769],
[0.6911, 0.6127, 0.6003, 0.5986, 0.5936],
[0.6796, 0.6053, 0.6000, 0.5911, 0.5884],
[0.7624, 0.5589, 0.5428, 0.5278, 0.5275]], device='cuda:0')
tensor([[ 0, 296, 234, 1651, 1184],
[1542, 466, 438, 1207, 1911],
[ 2, 1562, 632, 1852, 382],
[ 3, 694, 932, 1765, 662],
[ 4, 35, 747, 26, 432],
[ 534, 175, 5, 1495, 575],
[ 6, 1802, 1875, 747, 21],
[ 7, 1913, 1936, 640, 6],
[ 8, 747, 167, 1318, 1743],
[ 9, 1583, 1145, 219, 357]], device='cuda:0')
"""
# As you can see, almost every query_i has document_i as the most similar document.
# 6. Print the top-k results
for query_idx, top_doc_idx in enumerate(topk_indices[:, 0]):
print(f"Query {query_idx}: {queries[query_idx]}")
print(f"Top Document: {docs[top_doc_idx]}")
print()
"""
Query 0: when did richmond last play in a preliminary final
Top Document: Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.
Query 1: who sang what in the world's come over you
Top Document: Life's What You Make It (Talk Talk song) "Life's What You Make It" is a song by the English band Talk Talk. It was released as a single in 1986, the first from the band's album The Colour of Spring. The single was a hit in the UK, peaking at No. 16, and charted in numerous other countries, often reaching the Top 20.
Query 2: who produces the most wool in the world
Top Document: Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.
Query 3: where does alaska the last frontier take place
Top Document: Alaska: The Last Frontier Alaska: The Last Frontier is an American reality cable television series on the Discovery Channel, currently in its 7th season of broadcast. The show documents the extended Kilcher family, descendants of Swiss immigrants and Alaskan pioneers, Yule and Ruth Kilcher, at their homestead 11 miles outside of Homer.[1] By living without plumbing or modern heating, the clan chooses to subsist by farming, hunting and preparing for the long winters.[2] The Kilcher family are relatives of the singer Jewel,[1][3] who has appeared on the show.[4]
Query 4: a day to remember all i want cameos
Top Document: All I Want (A Day to Remember song) The music video for the song, which was filmed in October 2010,[4] was released on January 6, 2011.[5] It features cameos of numerous popular bands and musicians. The cameos are: Tom Denney (A Day to Remember's former guitarist), Pete Wentz, Winston McCall of Parkway Drive, The Devil Wears Prada, Bring Me the Horizon, Sam Carter of Architects, Tim Lambesis of As I Lay Dying, Silverstein, Andrew WK, August Burns Red, Seventh Star, Matt Heafy of Trivium, Vic Fuentes of Pierce the Veil, Mike Herrera of MxPx, and Set Your Goals.[5] Rock Sound called the video "quite excellent".[5]
Query 5: what does the red stripes mean on the american flag
Top Document: Flag of the United States The flag of the United States of America, often referred to as the American flag, is the national flag of the United States. It consists of thirteen equal horizontal stripes of red (top and bottom) alternating with white, with a blue rectangle in the canton (referred to specifically as the "union") bearing fifty small, white, five-pointed stars arranged in nine offset horizontal rows, where rows of six stars (top and bottom) alternate with rows of five stars. The 50 stars on the flag represent the 50 states of the United States of America, and the 13 stripes represent the thirteen British colonies that declared independence from the Kingdom of Great Britain, and became the first states in the U.S.[1] Nicknames for the flag include The Stars and Stripes,[2] Old Glory,[3] and The Star-Spangled Banner.
Query 6: where did they film diary of a wimpy kid
Top Document: Diary of a Wimpy Kid (film) Filming of Diary of a Wimpy Kid was in Vancouver and wrapped up on October 16, 2009.
Query 7: where was beasts of the southern wild filmed
Top Document: Beasts of the Southern Wild The film's fictional setting, "Isle de Charles Doucet", known to its residents as the Bathtub, was inspired by several isolated and independent fishing communities threatened by erosion, hurricanes and rising sea levels in Louisiana's Terrebonne Parish, most notably the rapidly eroding Isle de Jean Charles. It was filmed in Terrebonne Parish town Montegut.[5]
Query 8: what part of the country are you likely to find the majority of the mollisols
Top Document: Mollisol Mollisols occur in savannahs and mountain valleys (such as Central Asia, or the North American Great Plains). These environments have historically been strongly influenced by fire and abundant pedoturbation from organisms such as ants and earthworms. It was estimated that in 2003, only 14 to 26 percent of grassland ecosystems still remained in a relatively natural state (that is, they were not used for agriculture due to the fertility of the A horizon). Globally, they represent ~7% of ice-free land area. As the world's most agriculturally productive soil order, the Mollisols represent one of the more economically important soil orders.
Query 9: when did fosters home for imaginary friends start
Top Document: Foster's Home for Imaginary Friends McCracken conceived the series after adopting two dogs from an animal shelter and applying the concept to imaginary friends. The show first premiered on Cartoon Network on August 13, 2004, as a 90-minute television film. On August 20, it began its normal run of twenty-to-thirty-minute episodes on Fridays, at 7 pm. The series finished its run on May 3, 2009, with a total of six seasons and seventy-nine episodes. McCracken left Cartoon Network shortly after the series ended. Reruns have aired on Boomerang from August 11, 2012 to November 3, 2013 and again from June 1, 2014 to April 3, 2017.
"""
```
</details>
### Colab demo
We've set up a short demo in a Colab notebook showing how you might use our model:
[Try our model in Colab](https://colab.research.google.com/drive/1ddWeNj9nztHrwtoSEtaArfs7_NZhZA6k?usp=sharing)
### Training details
All other hyperparameters not mentioned here (learning rate, etc.) are either in the config or the CDE paper. If something is missing, please raise an issue here: https://github.com/jxmorris12/cde
#### Model details
cde-small-v2 includes a number of modeling changes from cde-small-v1:
- used the recently-released [ModernBERT](https://huggingface.co/blog/modernbert)
- added a residual connection between the model stages, which helps conditioning and gradient flow
- disabled pooling over instruction tokens
- disabled position-embedding nullification over contextual tokens
- disabled weight decay (not sure if this one helped or not)
#### Unsupervised training
Trained for six epochs on the nomic-unsupervised dataset with cluster size of 512 and batch size of 512, using GTR clusters and GTE-large filtering. (Probably would have performed better with GTE clustering too, but that's an expensive operation that we didn't rerun.)
#### Supervised training
Trained for four epochs on the BGE dataset with GTE clusters and GTE hard-negative filtering.
### Cite us
Used our model, method, or architecture? Want to cite us? Here's the ArXiv citation information:
```
@misc{morris2024contextualdocumentembeddings,
title={Contextual Document Embeddings},
author={John X. Morris and Alexander M. Rush},
year={2024},
eprint={2410.02525},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.02525},
}
``` |
sid/ppo-Huggy | sid | "2023-06-19T22:53:24Z" | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-06-19T22:52:44Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: sid/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/yokoyama_nao_theidolmstermillionlive | CyberHarem | "2023-09-23T20:41:44Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yokoyama_nao_theidolmstermillionlive",
"license:mit",
"region:us"
] | text-to-image | "2023-09-23T20:28:46Z" | ---
license: mit
datasets:
- CyberHarem/yokoyama_nao_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yokoyama_nao_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3500, you need to download `3500/yokoyama_nao_theidolmstermillionlive.pt` as the embedding and `3500/yokoyama_nao_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3500**, with the score of 0.985. The trigger words are:
1. `yokoyama_nao_theidolmstermillionlive`
2. `brown_hair, ahoge, purple_eyes, side_ponytail, bangs, drill_hair, smile, side_drill, medium_hair, sidelocks, blush, hair_ornament, open_mouth, breasts`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.980 | [Download](7500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.982 | [Download](7000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.981 | [Download](6500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.982 | [Download](6000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.973 | [Download](5500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.982 | [Download](5000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.982 | [Download](4500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.980 | [Download](4000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| **3500** | **0.985** | [**Download**](3500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.981 | [Download](3000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.981 | [Download](2500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.972 | [Download](2000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.959 | [Download](1500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.968 | [Download](1000/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.946 | [Download](500/yokoyama_nao_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
gaokaobishuati/a2c-PandaReachDense-v2 | gaokaobishuati | "2023-05-08T05:15:03Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-25T13:59:45Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.61 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the archive filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption; adjust it to match the files in the repo.
checkpoint = load_from_hub("gaokaobishuati/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
IPPATAPUVENKATASRICHANDRA/whishper | IPPATAPUVENKATASRICHANDRA | "2025-03-14T11:36:09Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-03-14T09:10:38Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: whishper
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ta
split: test
args: ta
metrics:
- name: Wer
type: wer
value: 72.24880382775119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whishper
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5474
- Wer: 72.2488
- Cer: 29.9605
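A minimal usage sketch (assuming the checkpoint is public on the Hub under this model id; the audio path is a placeholder):
```python
# Transcribe an audio clip with the fine-tuned checkpoint via the transformers pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="IPPATAPUVENKATASRICHANDRA/whishper")
print(asr("clip.wav")["text"])  # "clip.wav" is a placeholder for your own audio file
```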
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 0.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.2442 | 0.0333 | 5 | 0.8071 | 140.3509 | 157.0811 |
| 0.2386 | 0.0667 | 10 | 0.7964 | 146.2520 | 136.7877 |
| 0.3848 | 0.1 | 15 | 0.7687 | 146.8900 | 111.5479 |
| 0.3015 | 0.1333 | 20 | 0.7213 | 157.0973 | 126.8761 |
| 0.2178 | 0.1667 | 25 | 0.6916 | 159.1707 | 144.8561 |
| 0.2314 | 0.2 | 30 | 0.6551 | 149.6013 | 125.3526 |
| 0.2112 | 0.2333 | 35 | 0.6239 | 99.3620 | 64.2844 |
| 0.1571 | 0.2667 | 40 | 0.5794 | 76.5550 | 35.1514 |
| 0.1934 | 0.3 | 45 | 0.5547 | 73.0463 | 33.7596 |
| 0.3231 | 0.3333 | 50 | 0.5474 | 72.2488 | 29.9605 |
| 0.1035 | 0.3667 | 55 | 0.5434 | 72.5678 | 32.3491 |
| 0.1991 | 0.4 | 60 | 0.5454 | 74.0032 | 31.4275 |
| 0.196 | 0.4333 | 65 | 0.5495 | 73.5247 | 36.0166 |
| 0.4541 | 0.4667 | 70 | 0.5448 | 73.3652 | 38.5556 |
| 0.2166 | 0.5 | 75 | 0.5418 | 73.3652 | 39.3455 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Shardev/finetuned_demo_2 | Shardev | "2024-05-07T11:14:52Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-07T11:14:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SanctumAI/Mistral-7B-Instruct-v0.3-GGUF | SanctumAI | "2024-09-15T11:33:21Z" | 77,343 | 9 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-05-23T13:28:04Z" | ---
pipeline_tag: text-generation
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---

*This model was quantized by [SanctumAI](https://sanctum.ai). To leave feedback, join our community in [Discord](https://discord.gg/7ZNE78HJKh).*
# Mistral 7B Instruct v0.3 GGUF
**Model creator:** [mistralai](https://huggingface.co/mistralai)<br>
**Original model**: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)<br>
## Model Summary:
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2):
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Prompt Template:
If you're using the Sanctum app, simply use the `Mistral` model preset.
Prompt template:
```
<s>[INST] {prompt} [/INST]
```
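If you want to run the GGUF files outside the Sanctum app, a minimal sketch with `llama-cpp-python` could look like the following (the local file name and the context size are assumptions, not part of the original release):
```python
# Minimal sketch, assuming a quant file from the table below has already been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.3.Q4_K_M.gguf",  # assumed local path
    n_ctx=32768,  # full context; lower this if memory is tight
)

# The prompt template above is applied verbatim here.
prompt = "<s>[INST] Summarize what a GGUF file is in one sentence. [/INST]"
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```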
## Hardware Requirements Estimate
| Name | Quant method | Size | Memory (RAM, vRAM) required (for full context of 32k tokens) |
| ---- | ---- | ---- | ---- |
| [mistral-7b-instruct-v0.3.Q2_K.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q2_K.gguf) | Q2_K | 2.72 GB | 6.78 GB |
| [mistral-7b-instruct-v0.3.Q3_K_S.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.17 GB | 7.19 GB |
| [mistral-7b-instruct-v0.3.Q3_K_M.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.52 GB | 7.52 GB |
| [mistral-7b-instruct-v0.3.Q3_K_L.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.83 GB | 7.80 GB |
| [mistral-7b-instruct-v0.3.Q4_0.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q4_0.gguf) | Q4_0 | 4.11 GB | 8.07 GB |
| [mistral-7b-instruct-v0.3.Q4_K_S.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.14 GB | 8.10 GB |
| [mistral-7b-instruct-v0.3.Q4_K_M.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.37 GB | 8.31 GB |
| [mistral-7b-instruct-v0.3.Q4_K.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q4_K.gguf) | Q4_K | 4.37 GB | 8.31 GB |
| [mistral-7b-instruct-v0.3.Q4_1.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q4_1.gguf) | Q4_1 | 4.56 GB | 8.48 GB |
| [mistral-7b-instruct-v0.3.Q5_0.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q5_0.gguf) | Q5_0 | 5.00 GB | 8.90 GB |
| [mistral-7b-instruct-v0.3.Q5_K_S.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.00 GB | 8.90 GB |
| [mistral-7b-instruct-v0.3.Q5_K_M.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.14 GB | 9.02 GB |
| [mistral-7b-instruct-v0.3.Q5_K.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q5_K.gguf) | Q5_K | 5.14 GB | 9.02 GB |
| [mistral-7b-instruct-v0.3.Q5_1.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q5_1.gguf) | Q5_1 | 5.45 GB | 9.31 GB |
| [mistral-7b-instruct-v0.3.Q6_K.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q6_K.gguf) | Q6_K | 5.95 GB | 9.78 GB |
| [mistral-7b-instruct-v0.3.Q8_0.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.Q8_0.gguf) | Q8_0 | 7.70 GB | 11.41 GB |
| [mistral-7b-instruct-v0.3.f16.gguf](https://huggingface.co/SanctumAI/Mistral-7B-Instruct-v0.3-GGUF/blob/main/mistral-7b-instruct-v0.3.f16.gguf) | f16 | 14.50 GB | 17.74 GB |
## Disclaimer
Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Model is the sole responsibility of the person or entity who originated such Model. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Models. Sanctum further disclaims any warranty that the Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or use of any other Model provided by or through Sanctum. |
aakorolyova/reported_outcome_extraction | aakorolyova | "2022-05-25T19:31:52Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-05-18T08:32:05Z" | <h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting reported outcomes (i.e. those for which results are presented) from articles reporting clinical trials.
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Sanjay Kamath, Patrick Paroubek. Extracting primary and reported outcomes from articles reporting randomized controlled trials using pre-trained deep language representations. Preprint: https://easychair.org/publications/preprint/qpml
The original work was conducted within the scope of the PhD project "Assisted authoring for avoiding inadequate claims in scientific reporting", part of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model is intended to be used for extracting reported outcomes from texts of clinical trials.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. A sample code for getting model predictions is below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForTokenClassification.from_pretrained(r'aakorolyova/reported_outcome_extraction')
text = """Compared with placebo plus chemotherapy, pembrolizumab plus chemotherapy improved overall survival in patients with previously untreated, advanced oesophageal squamous cell carcinoma and PD-L1 CPS of 10 or more, and overall survival and progression-free survival in patients with oesophageal squamous cell carcinoma, PD-L1 CPS of 10 or more, and in all randomised patients regardless of histology, and had a manageable safety profile in the total as-treated population."""
encoded_input = tokenizer(text, padding=True, truncation=True, max_length=2000, return_tensors='pt')
output = model(**encoded_input)['logits']
output = np.argmax(output.detach().numpy(), axis=2)
print(output)
```
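The printed output is an array of label indices, one per input token. Continuing from the snippet above, a short hedged addition (assuming the label names are stored in the standard `model.config.id2label` mapping and follow a BIO scheme with `O` for tokens outside any outcome span) pairs each token with its predicted label:
```python
# Continues from the snippet above (tokenizer, model, encoded_input, output are reused).
tokens = tokenizer.convert_ids_to_tokens(encoded_input["input_ids"][0].tolist())
labels = [model.config.id2label[int(idx)] for idx in output[0]]
for token, label in zip(tokens, labels):
    if label != "O":  # "O" (outside) is an assumption about the tag set; adjust if it differs
        print(token, label)
```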
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Reported_Outcomes
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Precision: 65.57%
Recall: 74.77%
F1: 69.87% |
jlbaker361/ddpogan_512_humans_50_0_0 | jlbaker361 | "2024-10-08T23:32:12Z" | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-10-08T12:19:31Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
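In the absence of author-provided instructions, a minimal sketch based only on the repository tags (which indicate a `StableDiffusionPipeline`) might look like this; the prompt is a placeholder, not something specified by the model authors:
```python
# Hedged sketch based on the repo tags; not an official usage example.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("jlbaker361/ddpogan_512_humans_50_0_0")
image = pipe("a portrait photo of a person").images[0]  # placeholder prompt
image.save("sample.png")
```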
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sumail/Goat_Derrick03 | Sumail | "2024-03-29T07:27:14Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:stabilityai/stablelm-2-zephyr-1_6b",
"base_model:finetune:stabilityai/stablelm-2-zephyr-1_6b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-29T07:25:30Z" | ---
base_model:
- stabilityai/stablelm-2-zephyr-1_6b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: stabilityai/stablelm-2-zephyr-1_6b
layer_range: [0, 24]
- model: stabilityai/stablelm-2-zephyr-1_6b
layer_range: [0, 24]
merge_method: slerp
base_model: stabilityai/stablelm-2-zephyr-1_6b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
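A configuration like this is normally applied with mergekit's `mergekit-yaml` command (for example `mergekit-yaml config.yml ./merged`). Loading the resulting checkpoint should follow the usual transformers pattern; the sketch below is an assumption, not an author-provided example:
```python
# Hedged sketch for loading the merged model; depending on your transformers
# version, trust_remote_code=True may be needed for the StableLM-2 tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sumail/Goat_Derrick03"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```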
|
shhyamMS/whisper-small-en | shhyamMS | "2023-06-02T11:44:52Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-06-02T06:59:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-small-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-en
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7703
- eval_wer: 66.9405
- eval_runtime: 274.8213
- eval_samples_per_second: 1.401
- eval_steps_per_second: 0.178
- epoch: 83.33
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
andrewcyeow/phishing_url_model | andrewcyeow | "2024-11-22T22:19:21Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-22T22:15:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
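Pending author-provided instructions, a hedged sketch (assuming the model is a standard binary text classifier over URL strings, as the repository name suggests; the label names come from the model config and are not documented here) would be:
```python
# Hedged sketch; the exact label names depend on the model's config.
from transformers import pipeline

classifier = pipeline("text-classification", model="andrewcyeow/phishing_url_model")
print(classifier("http://example.com/login-update-account"))  # placeholder URL
```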
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/unsloth_-_llama-2-7b-bnb-4bit-4bits | RichardErkhov | "2024-05-04T02:08:26Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-04T02:02:25Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-7b-bnb-4bit - bnb 4bits
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/llama-2-7b-bnb-4bit/
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- llama
- llama2
- llama-2
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
Directly quantized 4bit model with `bitsandbytes`.
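A hedged loading sketch (assumptions: a CUDA GPU plus the `bitsandbytes` and `accelerate` packages, since the checkpoint is stored pre-quantized in 4-bit):
```python
# Hedged sketch: load the pre-quantized 4-bit checkpoint directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/unsloth_-_llama-2-7b-bnb-4bit-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```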
We have a Google Colab Tesla T4 notebook for Llama 7b here: https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
|
HawtStrokes/llama-3-8b-Instruct-bnb-4bit-HawtStrokes-yen-finetuned | HawtStrokes | "2025-03-08T19:01:38Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-08T18:58:53Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HawtStrokes
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kalita/videomae-small-finetuned-ssv2-finetuned-traffic-dataset-mae | kalita | "2024-04-03T19:15:10Z" | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-small-finetuned-ssv2",
"base_model:finetune:MCG-NJU/videomae-small-finetuned-ssv2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-04-03T18:32:59Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-small-finetuned-ssv2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-small-finetuned-ssv2-finetuned-traffic-dataset-mae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-small-finetuned-ssv2-finetuned-traffic-dataset-mae
This model is a fine-tuned version of [MCG-NJU/videomae-small-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-small-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0486
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 448
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4852 | 0.12 | 56 | 0.4753 | 0.7143 |
| 0.6294 | 1.12 | 112 | 0.0372 | 1.0 |
| 0.4141 | 2.12 | 168 | 0.0060 | 1.0 |
| 0.2121 | 3.12 | 224 | 0.0062 | 1.0 |
| 0.8881 | 4.12 | 280 | 0.0046 | 1.0 |
| 0.3003 | 5.12 | 336 | 0.0054 | 1.0 |
| 0.1027 | 6.12 | 392 | 0.1611 | 0.9286 |
| 0.0029 | 7.12 | 448 | 0.0898 | 0.9286 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1+cu118
- Datasets 2.1.0
- Tokenizers 0.15.2
|
atsuki-yamaguchi/gemma-2-9b-si-30K-50-align | atsuki-yamaguchi | "2024-09-17T09:31:24Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"si",
"arxiv:2406.11477",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-09-06T20:28:24Z" |
---
license: gemma
language:
- si
base_model: google/gemma-2-9b
library_name: transformers
---
# Gemma2 9B for Sinhala: 50 target vocabulary size + Align target vocabulary initialization + 2x2LS/MTP/512 training
This model is built on top of Gemma2 9B adapted for Sinhala using 30K target language sentences sampled from CC-100.
## Model Details
* **Vocabulary**: This model has an additional target vocabulary of 50 tokens.
* **Target vocabulary initialization**: The target weights of the embedding were initialized using Align initialization.
* **Training**: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP/512 strategies introduced in the paper.
## Model Description
- **Language:** Sinhala
- **License:** Gemma Terms of Use
- **Fine-tuned from model:** google/gemma-2-9b
## Model Sources
- **Repository:** https://github.com/gucci-j/lowres-cve
- **Paper:** https://arxiv.org/abs/2406.11477
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/gemma-2-9b-si-30K-50-align"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/gemma-2-9b-si-30K-50-align"
)
```
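Continuing from the snippet above, a short hedged generation example (the English prompt is only a placeholder; a Sinhala prompt would normally be used with this adaptation):
```python
# Continues from the loading snippet above.
inputs = tokenizer("Write one sentence about Sri Lanka:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```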
## Citation
```
@article{yamaguchi-etal-2024-effectively,
title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
year={2024},
journal={ArXiv},
volume={abs/2406.11477},
url={https://arxiv.org/abs/2406.11477},
}
```
|
mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF | mradermacher | "2025-03-29T05:12:20Z" | 253 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:amadeusai/AV-MI-Qwen2.5-0.5B-PT-BR-Instruct",
"base_model:quantized:amadeusai/AV-MI-Qwen2.5-0.5B-PT-BR-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-19T21:32:48Z" | ---
base_model: amadeusai/AV-MI-Qwen2.5-0.5B-PT-BR-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/AV-MI-Qwen2.5-0.5B-PT-BR-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF/resolve/main/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
StepLaw/StepLaw-N_268M-D_79.0B-LR9.766e-04-BS393216 | StepLaw | "2025-04-10T01:04:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-10T01:02:44Z" | (Card content unavailable: the upstream fetch returned an HTTP 429 rate-limit error page instead of a model card.) |
featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF | featherless-ai-quants | "2025-02-13T01:12:26Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:ChaoticNeutrals/Captain-Eris_Twilight-V0.420-12B",
"base_model:quantized:ChaoticNeutrals/Captain-Eris_Twilight-V0.420-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-13T00:59:06Z" | ---
base_model: ChaoticNeutrals/Captain-Eris_Twilight-V0.420-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ChaoticNeutrals/Captain-Eris_Twilight-V0.420-12B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF/blob/main/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q8_0.gguf) | 12419.10 MB |
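The files can also be fetched programmatically; a hedged sketch with `huggingface_hub` (the chosen quant is arbitrary) is:
```python
# Hedged sketch: download one quant file, then point your GGUF runtime (llama.cpp,
# llama-cpp-python, etc.) at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-GGUF",
    filename="ChaoticNeutrals-Captain-Eris_Twilight-V0.420-12B-Q4_K_M.gguf",
)
print(path)
```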
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
quandh155/Llama-3.2-1B-Instruct-ald | quandh155 | "2025-03-07T04:10:12Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-03-07T01:48:11Z" | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct-ald
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct-ald
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4828
## Model description
More information needed
## Intended uses & limitations
More information needed
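No usage notes are given; since this repository holds a PEFT adapter for Llama-3.2-1B-Instruct, a hedged loading sketch (access to the gated base checkpoint is assumed) would be:
```python
# Hedged sketch: load the adapter together with its base model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("quandh155/Llama-3.2-1B-Instruct-ald")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
```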
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 6.4828 |
| No log | 2.0 | 2 | 6.4828 |
| No log | 3.0 | 3 | 6.4828 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v5.0 | ITT-AF | "2024-03-05T02:03:49Z" | 57 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-05T01:18:30Z" | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v5.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
mempet/merged_gemma_math_005_synup1 | mempet | "2024-12-18T20:56:39Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-18T20:54:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dvbeckham7/sd-class-butterflies-32 | dvbeckham7 | "2024-08-29T03:14:42Z" | 45 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-08-29T03:14:33Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dvbeckham7/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|