| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/WizardLM-30B-Uncensored-i1-GGUF | mradermacher | "2024-11-13T07:24:24Z" | 345 | 0 | transformers | [
"transformers",
"gguf",
"uncensored",
"en",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"base_model:cognitivecomputations/WizardLM-30B-Uncensored",
"base_model:quantized:cognitivecomputations/WizardLM-30B-Uncensored",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-11-12T18:21:18Z" | ---
base_model: cognitivecomputations/WizardLM-30B-Uncensored
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/WizardLM-30B-Uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
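As a rough illustration, the sketch below fetches one of the single-file quants from the "Provided Quants" table in the next section and runs it with `llama-cpp-python`; the context size and prompt are placeholders, and the model's prompt template is not handled here.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the i1-Q4_K_M file listed below (roughly 20 GB, per the size column).
gguf_path = hf_hub_download(
    repo_id="mradermacher/WizardLM-30B-Uncensored-i1-GGUF",
    filename="WizardLM-30B-Uncensored.i1-Q4_K_M.gguf",
)

# Load on CPU with a modest context window and run a plain completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```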
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-i1-GGUF/resolve/main/WizardLM-30B-Uncensored.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NobodyExistsOnTheInternet/tinystoriesmixtraltesttrain | NobodyExistsOnTheInternet | "2023-12-29T12:24:35Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2023-12-28T15:41:22Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
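For illustration, the values above correspond roughly to the following `BitsAndBytesConfig`; attaching the adapter to its quantized base model is an assumption drawn from the card metadata (`peft`, base model `mistralai/Mistral-7B-v0.1`) rather than documented usage.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit settings from the list above; the int8-specific fields keep their defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed usage: load the base model in 4-bit, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "NobodyExistsOnTheInternet/tinystoriesmixtraltesttrain")
```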
### Framework versions
- PEFT 0.6.0
|
rishipatel92/ppo-LunarLander-v2 | rishipatel92 | "2023-03-10T03:59:06Z" | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2022-12-25T11:54:39Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -228.76 +/- 140.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
zhiyil/roberta-base-finetuned-intent-ipu | zhiyil | "2022-12-16T12:36:13Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:snips_built_in_intents",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-16T11:23:10Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- snips_built_in_intents
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-intent-ipu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-intent-ipu
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the snips_built_in_intents dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1503
- Accuracy: 1.0
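As a quick usage sketch, assuming the saved PyTorch weights load with plain `transformers` outside the IPU stack, the checkpoint can be queried through the standard text-classification pipeline; the example utterance is illustrative.

```python
from transformers import pipeline

# Intent classification over SNIPS built-in intents.
classifier = pipeline(
    "text-classification",
    model="zhiyil/roberta-base-finetuned-intent-ipu",
)
print(classifier("Play some jazz in the living room."))
```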
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2478 | 1.0 | 75 | 0.6069 | 0.96 |
| 0.2522 | 2.0 | 150 | 0.1503 | 1.0 |
| 0.0903 | 3.0 | 225 | 0.0712 | 1.0 |
| 0.0883 | 4.0 | 300 | 0.0350 | 1.0 |
| 0.0491 | 5.0 | 375 | 0.0267 | 1.0 |
| 0.0305 | 6.0 | 450 | 0.0218 | 1.0 |
| 0.0461 | 7.0 | 525 | 0.0191 | 1.0 |
| 0.039 | 8.0 | 600 | 0.0174 | 1.0 |
| 0.0337 | 9.0 | 675 | 0.0166 | 1.0 |
| 0.0164 | 10.0 | 750 | 0.0162 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.0
|
jonatasgrosman/exp_w2v2t_it_vp-es_s878 | jonatasgrosman | "2022-07-08T20:47:26Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-08T20:47:00Z" | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-es_s878
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
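A minimal transcription sketch, assuming the HuggingSound `SpeechRecognitionModel` API mentioned above and a 16 kHz audio file at a placeholder path:

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint and transcribe one (placeholder) Italian recording.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_it_vp-es_s878")
transcriptions = model.transcribe(["path/to/italian_audio_16khz.wav"])
print(transcriptions[0]["transcription"])
```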
|
lafayettecreditrepair/Credit-Repair-Services-Lafayette | lafayettecreditrepair | "2022-10-26T13:08:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-10-26T13:07:58Z" | We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
We’re not your average credit repair firm: we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable.
Follow this [link](https://lafayette.asapcreditrepairusa.com/) |
jondurbin/airocoder-34b-2.1 | jondurbin | "2023-08-31T14:38:12Z" | 1,431 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-30T23:52:19Z" | ---
license: llama2
---
codellama-34b fine-tuned on the "code" expert from lmoe adapters. |
logasja/instagram-ginza | logasja | "2025-02-20T15:47:27Z" | 8 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | "2025-02-17T17:47:33Z" | ---
library_name: keras
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
pipeline_tag: image-to-image
datasets:
- logasja/FDF
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/a12aef0a8ae82a31a052485a383c5d95)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
64,
128,
256,
512,
512
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 64,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_ArcFace": {
"d": "cosine_similarity",
"f": "ArcFace",
"name": "FEAT_ArcFace",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.9
},
"mean_squared_error": {
"name": "mean_squared_error",
"reduction": "sum_over_batch_size",
"weight": 0.1
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
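A minimal loading sketch, assuming the weights can be fetched with `huggingface_hub.from_pretrained_keras` and that inputs are 256x256 RGB float tensors, as suggested by the configuration above; the random array stands in for a real face crop.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("logasja/instagram-ginza")

# Placeholder batch of one 256x256 RGB image in [0, 1].
face = np.random.rand(1, 256, 256, 3).astype("float32")
filtered = model.predict(face)
print(filtered.shape)  # expected (1, 256, 256, 3) given n_labels=3 and tanh output
```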
## Model Architecture Plot
 |
Myashka/gpt-imdb-sigmoid-beta_0.1 | Myashka | "2023-12-06T21:00:11Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:lvwerra/gpt2-imdb",
"base_model:finetune:lvwerra/gpt2-imdb",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-05T15:10:55Z" | ---
base_model: lvwerra/gpt2-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt-imdb-sigmoid-beta_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-imdb-sigmoid-beta_0.1
This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Step: 7000
- Loss: 0.1445
- Rewards/chosen: -5.6156
- Rewards/rejected: -11.9139
- Rewards/accuracies: 0.9354
- Rewards/margins: 6.2982
- Logps/rejected: -382.8238
- Logps/chosen: -291.4216
- Logits/rejected: -44.3728
- Logits/chosen: -46.3321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2741 | 0.21 | 500 | 0.3546 | -0.7644 | -2.6310 | 0.8604 | 1.8666 | -289.9951 | -242.9089 | -34.2705 | -35.4568 |
| 0.3403 | 0.42 | 1000 | 0.2963 | -1.6755 | -4.3008 | 0.8687 | 2.6253 | -306.6930 | -252.0203 | -40.9205 | -42.3105 |
| 0.1939 | 0.63 | 1500 | 0.2596 | -3.1297 | -6.7295 | 0.8771 | 3.5998 | -330.9802 | -266.5624 | -37.6829 | -39.1821 |
| 0.2094 | 0.83 | 2000 | 0.1941 | -2.9414 | -6.9143 | 0.9292 | 3.9728 | -332.8280 | -264.6796 | -38.0792 | -39.7464 |
| 0.1481 | 1.04 | 2500 | 0.1744 | -3.7473 | -8.3469 | 0.9333 | 4.5996 | -347.1542 | -272.7383 | -40.9252 | -42.5164 |
| 0.2862 | 1.25 | 3000 | 0.1750 | -4.5825 | -9.7147 | 0.9292 | 5.1322 | -360.8324 | -281.0905 | -41.9790 | -44.0717 |
| 0.304 | 1.46 | 3500 | 0.1652 | -4.3291 | -9.8200 | 0.9333 | 5.4909 | -361.8853 | -278.5559 | -44.1786 | -46.1418 |
| 0.2167 | 1.67 | 4000 | 0.1580 | -4.6175 | -10.0305 | 0.9354 | 5.4130 | -363.9903 | -281.4398 | -43.6324 | -45.4854 |
| 0.1396 | 1.88 | 4500 | 0.1518 | -4.5940 | -10.1635 | 0.9396 | 5.5696 | -365.3205 | -281.2049 | -41.9461 | -43.8060 |
| 0.1575 | 2.08 | 5000 | 0.1525 | -5.3119 | -11.3685 | 0.9292 | 6.0566 | -377.3703 | -288.3840 | -43.4045 | -45.2127 |
| 0.0338 | 2.29 | 5500 | 0.1472 | -5.2545 | -11.3863 | 0.9333 | 6.1319 | -377.5485 | -287.8099 | -43.2283 | -45.1626 |
| 0.1631 | 2.5 | 6000 | 0.1496 | -5.6862 | -11.9852 | 0.9333 | 6.2991 | -383.5375 | -292.1269 | -43.6007 | -45.5693 |
| 0.1177 | 2.71 | 6500 | 0.1473 | -5.6329 | -11.9588 | 0.9417 | 6.3259 | -383.2729 | -291.5939 | -44.3503 | -46.3168 |
| 0.2342 | 2.92 | 7000 | **0.1445** | -5.6156 | -11.9139 | 0.9354 | 6.2982 | -382.8238 | -291.4216 | -44.3728 | -46.3321 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
havinash-ai/052480e5-2a5b-4558-a13e-feb601d6e81c | havinash-ai | "2025-02-15T11:55:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | "2025-02-15T11:53:02Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 052480e5-2a5b-4558-a13e-feb601d6e81c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 052480e5-2a5b-4558-a13e-feb601d6e81c
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 12.1377
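A minimal loading sketch, assuming the adapter can be attached to its base model via PEFT's `AutoPeftModelForCausalLM` (which resolves the base checkpoint from the adapter config):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads fxmarty/tiny-random-GemmaForCausalLM and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained("havinash-ai/052480e5-2a5b-4558-a13e-feb601d6e81c")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
```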
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hilbs/Packaging-Model | Hilbs | "2023-05-31T00:54:41Z" | 189 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"en",
"dataset:Hilbs/Packaging",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-31T00:05:28Z" | ---
datasets:
- Hilbs/Packaging
language:
- en
--- |
JustFrederik/nllb-200-1.3B-ct2 | JustFrederik | "2023-05-14T21:55:00Z" | 1 | 0 | transformers | [
"transformers",
"nllb",
"translation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug",
"bg",
"ca",
"ceb",
"cs",
"cjk",
"ckb",
"crh",
"cy",
"da",
"de",
"dik",
"dyu",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fj",
"fi",
"fon",
"fr",
"fur",
"fuv",
"gaz",
"gd",
"ga",
"gl",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hne",
"hr",
"hu",
"hy",
"ig",
"ilo",
"id",
"is",
"it",
"jv",
"ja",
"kab",
"kac",
"kam",
"kn",
"ks",
"ka",
"kk",
"kbp",
"kea",
"khk",
"km",
"ki",
"rw",
"ky",
"kmb",
"kmr",
"knc",
"kg",
"ko",
"lo",
"lij",
"li",
"ln",
"lt",
"lmo",
"ltg",
"lb",
"lua",
"lg",
"luo",
"lus",
"lvs",
"mag",
"mai",
"ml",
"mar",
"min",
"mk",
"mt",
"mni",
"mos",
"mi",
"my",
"nl",
"nn",
"nb",
"npi",
"nso",
"nus",
"ny",
"oc",
"ory",
"pag",
"pa",
"pap",
"pbt",
"pes",
"plt",
"pl",
"pt",
"prs",
"quy",
"ro",
"rn",
"ru",
"sg",
"sa",
"sat",
"scn",
"shn",
"si",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"st",
"es",
"sc",
"sr",
"ss",
"su",
"sv",
"swh",
"szl",
"ta",
"taq",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"tpi",
"tn",
"ts",
"tk",
"tum",
"tr",
"tw",
"tzm",
"ug",
"uk",
"umb",
"ur",
"uzn",
"vec",
"vi",
"war",
"wo",
"xh",
"ydd",
"yo",
"yue",
"zh",
"zsm",
"zu",
"dataset:flores-200",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | translation | "2023-05-14T21:38:28Z" | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"
tags:
- nllb
- translation
license: "cc-by-nc-4.0"
datasets:
- flores-200
metrics:
- bleu
- spbleu
- chrf++
---
https://huggingface.co/facebook/nllb-200-1.3B
```
ct2-transformers-converter --model facebook/nllb-200-1.3B --output_dir converted/nllb-200-1.3B-ct2
```
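A usage sketch for the converted directory, following CTranslate2's documented NLLB workflow; the language codes and example sentence are illustrative.

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator("converted/nllb-200-1.3B-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "facebook/nllb-200-1.3B", src_lang="eng_Latn"
)

# Tokenize the source sentence and translate with a French target prefix.
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, how are you?"))
results = translator.translate_batch([source], target_prefix=[["fra_Latn"]])
target = results[0].hypotheses[0][1:]  # drop the target-language token

print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
|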
kiki-ailab/Qwen2.5-3B-Instruct-KAI | kiki-ailab | "2025-03-10T05:40:39Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"vi",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-19T03:58:27Z" | ---
license: other
license_link: https://huggingface.co/kiki-ailab/Qwen2.5-3B-Instruct-KAI/blob/main/LICENSE.txt
library_name: transformers
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- vi
---
# Qwen2.5-3B-Instruct-KAI
## Introduction
Llama3.2-1B-Instruct-KAI, Llama3.2-3B-Instruct-KAI, Qwen2.5-0.5B-Instruct-KAI, Qwen2.5-1.5B-Instruct-KAI, and Qwen2.5-3B-Instruct-KAI are a collection of models fine-tuned from the open Qwen2.5* and Llama3.2* models. They are optimized for Vietnamese language understanding and generation tasks such as reading comprehension, information extraction, question answering and summarization.
## Quickstart
This is a demonstration of loading a model and performing a question-answering or summarization task.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "kiki-ailab/Qwen2.5-3B-Instruct-KAI"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Xin chào !"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Examples
**Example 1**:
```python
prompt = """Dưới đây là một số tài liệu / văn bản:
<DOC id="doc-1">
Theo một nghiên cứu gần đây, biến đổi khí hậu đã làm gia tăng tần suất và cường độ của các hiện tượng thời tiết cực đoan, bao gồm bão, hạn hán và lũ lụt. Các khu vực ven biển Đông Nam Á có nguy cơ cao nhất do nước biển dâng và hiện tượng xâm nhập mặn.
</DOC>
<DOC id="doc-2">
Một báo cáo từ Ngân hàng Thế giới cho thấy rằng biến đổi khí hậu sẽ ảnh hưởng nghiêm trọng đến sản xuất nông nghiệp, đặc biệt là ở các nước đang phát triển, nơi nền kinh tế phụ thuộc lớn vào nông nghiệp. Cụ thể, năng suất cây trồng có thể giảm từ 10% đến 25% trong 30 năm tới.
</DOC>
<DOC id="doc-3">
Một sáng kiến quốc tế đã được khởi động nhằm giảm thiểu tác động của biến đổi khí hậu thông qua việc thúc đẩy sử dụng năng lượng tái tạo và giảm phát thải carbon. Các nước phát triển đã cam kết hỗ trợ tài chính cho các quốc gia dễ bị tổn thương nhất, nhưng việc triển khai vẫn gặp nhiều thách thức.
</DOC>
TASK: Hãy trả lời câu hỏi "Biến đổi khí hậu ảnh hưởng như thế nào đến nông nghiệp ở các nước đang phát triển?"
INSTRUCTION:
1. Câu trả lời không quá 50 từ.
2. Trích dẫn rõ ràng tài liệu nào chứa thông tin liên quan, theo format: [doc-k]"""
```
**Example 2:**
```python
prompt = """Trả lời câu hỏi dựa vào nội dung đoạn văn sau:
====
Bão Milton bắt đầu đổ bộ vào Siesta Key, bang Florida, Mỹ, với sức gió 193 km/h, tương đương cấp 3 trong thang đo bão 5 cấp, vào khoảng 20h30 ngày 9/10 (7h30 sáng 10/10 giờ Hà Nội). Sau vài tiếng càn quét qua Florida, bão Milton hạ xuống cấp 2 và tiếp tục hạ xuống cấp 1 vào rạng sáng 10/10.
Đây là cơn bão thứ năm ở Mỹ và cơn bão thứ ba tấn công bang Florida trong năm nay. Trước khi bão Milton đổ bộ, Thống đốc Florida Ron DeSantis cho biết ít nhất 19 cơn lốc xoáy đã xuất hiện ở Florida và 116 cảnh báo lốc xoáy được ban bố khắp bang.
Mưa lớn xảy ra ở các khu vực, nhất là thành phố St. Petersburg khi hứng chịu "trận mưa nghìn năm có một", với lượng mưa trút xuống thành phố trong ba giờ tương đương ba tháng trong năm. Các thành phố McKay Creek, Clearwater Beach và Temple Terrace cũng ghi nhận lượng mưa lớn, lần lượt là 371 mm, 355 mm và 344 mm.
====
Yêu cầu câu trả lời hoặc là được trích ra từ đoạn văn, hoặc là 'NO ANSWER' nếu nội dung đoạn văn không liên quan đến câu hỏi.
Câu hỏi: Bão Milton mạnh như thế nào ? Diễn ra ở đâu ?
Câu trả lời:"""
```
**Example 3**:
```python
prompt = """Cho văn bản dưới đây:
====
Bão Milton bắt đầu đổ bộ vào Siesta Key, bang Florida, Mỹ, với sức gió 193 km/h, tương đương cấp 3 trong thang đo bão 5 cấp, vào khoảng 20h30 ngày 9/10 (7h30 sáng 10/10 giờ Hà Nội). Sau vài tiếng càn quét qua Florida, bão Milton hạ xuống cấp 2 và tiếp tục hạ xuống cấp 1 vào rạng sáng 10/10.
Đây là cơn bão thứ năm ở Mỹ và cơn bão thứ ba tấn công bang Florida trong năm nay. Trước khi bão Milton đổ bộ, Thống đốc Florida Ron DeSantis cho biết ít nhất 19 cơn lốc xoáy đã xuất hiện ở Florida và 116 cảnh báo lốc xoáy được ban bố khắp bang.
Mưa lớn xảy ra ở các khu vực, nhất là thành phố St. Petersburg khi hứng chịu "trận mưa nghìn năm có một", với lượng mưa trút xuống thành phố trong ba giờ tương đương ba tháng trong năm. Các thành phố McKay Creek, Clearwater Beach và Temple Terrace cũng ghi nhận lượng mưa lớn, lần lượt là 371 mm, 355 mm và 344 mm.
====
TASK: Đặt tiêu đề và tóm tắt bài báo trên thành 1-2 câu."""
```
## Benchmarks
### VMLU
We evaluate our fine-tuned models on the VMLU benchmark provided by https://vmlu.ai, alongside the ViSquad, ViDrop and ViDialog tasks reported in the table below.
| Model | VMLU | ViSquad | ViDrop | ViDialog |
|--------------------------|--------------|--------------|--------------|--------------|
| Llama3.2-1B-Instruct | 37.6 | 70.1 | 29.6 | 33.9 |
| Llama3.2-3B-Instruct | 47.6 | 90.3 | 63.5 | 50.8 |
| Qwen2.5-0.5B-Instruct | 39.1 | 62.5 | 31.5 | 28.0 |
| Qwen2.5-1.5B-Instruct | 48.6 | 86.7 | 54.5 | 39.8 |
| Qwen2.5-3B-Instruct | 52.9 | 88.3 | 72.4 | 54.4 |
| <br/>**Our finetuned models** | | | | |
| Llama3.2-1B-Instruct-KAI | 50.5 (+12.9) | 88.4 (+18.3) | 71.1 (+41.5) | 50.9 (+17.0) |
| Llama3.2-3B-Instruct-KAI | 58.1 (+10.5) | 93.5 (+3.2) | 81.4 (+17.9) | 67.3 (+16.5) |
| Qwen2.5-0.5B-Instruct-KAI | 49.7 (+10.6) | 87.3 (+24.8) | 62.3 (+30.8) | 39.0 (+11.0) |
| Qwen2.5-1.5B-Instruct-KAI | 57.5 (+8.9) | 93.3 (+6.6) | 76.0 (+21.5) | 54.6 (+14.8) |
| Qwen2.5-3B-Instruct-KAI | 63.5 (+10.6) | 94.2 (+5.9) | 80.9 (+8.5) | 68.5 (+14.1) |
### Evaluation on ArenaHard (CohereForAI)
We follow the evaluation method outlined in https://github.com/lmarena/arena-hard-auto to assess our fine-tuned models against others on the ArenaHard benchmark.
- Baseline model: `Qwen/Qwen2.5-7B-Instruct`
- Judge: `Qwen/Qwen2.5-72B-Instruct`
| # | model | size (B) | win (%) | tie (%) | lose (%) |
| -- | -------------------------------------------- | -------- | ---- | --- | ---- |
| 1 | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | 14 | 59.5 | 4.6 | 35.9 |
| 2 | CohereForAI/aya-expanse-8b | 8 | 55 | 4.6 | 40.4 |
| 3 | Qwen/Qwen2.5-14B-Instruct | 14 | 48.7 | 9.1 | 42.2 |
| 4 | **kiki-ailab/Qwen2.5-3B-Instruct-KAI** | 3 | 38.7 | 4.7 | 56.6 |
| 5 | meta-llama/Llama3.1-8B-Instruct | 8 | 38.6 | 4.9 | 56.5 |
| 6 | CohereForAI/c4ai-command-r7b-12-2024 | 7 | 35.1 | 3.3 | 61.6 |
| 7 | **kiki-ailab/Llama3.2-3B-Instruct-KAI** | 3 | 35 | 4.3 | 60.7 |
| 8 | arcee-ai/Arcee-VyLinh | 3 | 34.8 | 5.4 | 59.8 |
| 9 | **kiki-ailab/Qwen2.5-1.5B-Instruct-KAI** | 1.5 | 28.9 | 3.9 | 67.2 |
| 10 | deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | 7 | 23.2 | 2.8 | 74 |
| 11 | meta-llama/Llama-3.2-3B-Instruct | 3 | 21.2 | 4.4 | 74.4 |
| 12 | Qwen/Qwen2.5-3B-Instruct | 3 | 18.6 | 5.8 | 75.6 |
| 13 | zaloai/Llama3.2-1B-Instruct-ZAI | 1 | 17.4 | 3.7 | 78.9 |
| 14 | Viet-Mistral/Vistral-7B-Chat | 7 | 17.2 | 3.2 | 79.6 |
| 15 | **kiki-ailab/Qwen2.5-0.5B-Instruct-KAI** | 0.5 | 10.9 | 2 | 87.1 |
| 16 | meta-llama/Llama-3.2-1B-Instruct | 1 | 6.5 | 1.6 | 91.9 |
| 17 | Qwen/Qwen2.5-1.5B-Instruct | 1 | 6.4 | 3 | 90.6 |
| 18 | deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 1.5 | 3 | 1.5 | 95.5 |
| 19 | vinai/PhoGPT-4B-Chat | 4 | 1.2 | 2.7 | 96.1 |
| 20 | Qwen/Qwen2.5-0.5B-Instruct | 0.5 | 1 | 1.7 | 97.3 |
# Disclaimer
- Might still hallucinate on culture-specific content.
- Primary focus on Vietnamese language understanding.
- May not perform optimally for specialized technical domains.
# Feedback
We welcome any feedback on these public models. Please send your comments to [email protected].
|
8Spark/stardream_large_model | 8Spark | "2024-12-21T06:49:25Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-21T06:45:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alielfilali01/Q2AW1M-0001 | alielfilali01 | "2024-06-21T20:39:37Z" | 2,905 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-21T14:23:16Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liqi03/whisper-large-v3-pl-aug | liqi03 | "2024-07-31T13:23:30Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pl",
"dataset:google/fleurs",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-31T05:33:48Z" | ---
base_model: openai/whisper-large-v3
datasets:
- google/fleurs
language:
- pl
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Large V3 pl Fleurs Aug - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: google/fleurs
config: pl_pl
split: None
args: 'config: pl split: test'
metrics:
- type: wer
value: 281.1154598825832
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V3 pl Fleurs Aug - Chee Li
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1225
- Wer: 281.1155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0502 | 1.2579 | 1000 | 0.1122 | 224.0774 |
| 0.0099 | 2.5157 | 2000 | 0.1146 | 344.2200 |
| 0.0033 | 3.7736 | 3000 | 0.1187 | 283.3869 |
| 0.0005 | 5.0314 | 4000 | 0.1225 | 281.1155 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF | mradermacher | "2024-12-25T06:49:16Z" | 17 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Mistral-7B-v0.3-sft-SPIN-self",
"base_model:quantized:AmberYifan/Mistral-7B-v0.3-sft-SPIN-self",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-23T22:51:39Z" | ---
base_model: AmberYifan/Mistral-7B-v0.3-sft-SPIN-self
language:
- en
library_name: transformers
model_name: Mistral-7B-v0.3-sft-SPIN-self
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmberYifan/Mistral-7B-v0.3-sft-SPIN-self
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
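As a rough illustration, a single-file quant from the table in the next section can be fetched with `huggingface_hub` and then handed to llama.cpp or llama-cpp-python; the file chosen below is the Q4_K_M entry marked as recommended.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant (about 4.5 GB per the table below) into the local cache.
path = hf_hub_download(
    repo_id="mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF",
    filename="Mistral-7B-v0.3-sft-SPIN-self.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp / llama-cpp-python
```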
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.3-sft-SPIN-self-GGUF/resolve/main/Mistral-7B-v0.3-sft-SPIN-self.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Fatimetou/distilbert-base-uncased-finetuned-ner | Fatimetou | "2024-06-06T23:16:38Z" | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-02T09:27:11Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9266
- Recall: 0.9391
- F1: 0.9328
- Accuracy: 0.9840
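A quick usage sketch via the token-classification pipeline; because the training dataset is not documented here, the returned entity labels are simply whatever this checkpoint was trained with.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Fatimetou/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```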
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2398 | 1.0 | 878 | 0.0722 | 0.8970 | 0.9149 | 0.9058 | 0.9791 |
| 0.0498 | 2.0 | 1756 | 0.0593 | 0.9202 | 0.9352 | 0.9277 | 0.9830 |
| 0.0292 | 3.0 | 2634 | 0.0591 | 0.9266 | 0.9391 | 0.9328 | 0.9840 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LoneStriker/Higgs-Llama-3-70B-4.0bpw-h6-exl2 | LoneStriker | "2024-06-06T21:27:18Z" | 8 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-06T21:12:15Z" | ---
license: other
---
# Higgs-Llama-3-70B
Higgs-Llama-3-70B is post-trained from [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B), specially tuned for role-playing while being competitive in general-domain instruction-following and reasoning.
We perform supervised fine-tuning with our in-house instruction-following and chat datasets. Afterwards, we construct preference pairs with a semi-automated pipeline that relies on both human labelers and our private LLMs.
We conduct iterative preference optimization to align the model. During alignment, we adopted a special strategy to align the model’s behavior with the system message.
Compared with other instruct models, Higgs models follow their roles more closely.
See our [release blog](https://boson.ai/higgs-opensource/).
## Evaluation
All benchmarks eventually suffer from overfitting, including those for LLMs. Training on data that is particularly beneficial for benchmarks typically does not improve (and can even worsen) role-playing performance. We therefore worked to exclude benchmark data, including its training examples, from our fine-tuning data.
We highlight our results on two new and challenging benchmarks: [MMLU-Pro](https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro) and [Arena-Hard](https://github.com/lm-sys/arena-hard-auto). MMLU-Pro extends the popular MMLU benchmark. Because it was released only recently (after our models had finished training), we believe it is also less affected by overfitting in other released models.
### MMLU-Pro
<table class="col-12 col-md-6" width="100px">
<tr>
<td><b>Model</b></td>
<td><b>MMLU-Pro</b></td>
</tr>
<tr>
<td>GPT-4o</td>
<td>72.6</td>
</tr>
<tr>
<td>Gemini-1.5-Pro</td>
<td>69.0</td>
</tr>
<tr>
<td>Claude-3-Opus</td>
<td>68.5</td>
</tr>
<tr>
<td>GPT-4-Turbo</td>
<td>63.7</td>
</tr>
<tr style="font-weight: bold">
<td>Higgs-Llama-3-70B</td>
<td>63.2</td>
</tr>
<tr>
<td>Gemini-1.5-Flash</td>
<td>59.1</td>
</tr>
<tr>
<td>Claude-3-Sonnet</td>
<td>56.8</td>
</tr>
<tr>
<td>Llama-3-70B-Instruct</td>
<td>56.2</td>
</tr>
</table>
### Arena-Hard
<table class="col-12 col-md-6">
<tr>
<td><b>Model</b></td>
<td><b>Arena-Hard</b></td>
</tr>
<tr>
<td>GPT-4o</td>
<td>79.5</td>
</tr>
<tr>
<td>Gemini-1.5-Pro</td>
<td>72.0</td>
</tr>
<tr>
<td>Claude-3-Opus</td>
<td>60.4</td>
</tr>
<tr style="font-weight: bold">
<td>Higgs-Llama-3-70B</td>
<td>49.6</td>
</tr>
<tr>
<td>Gemini-1.5-Flash</td>
<td>49.6</td>
</tr>
<tr>
<td>Claude-3-Sonnet</td>
<td>46.8</td>
</tr>
<tr>
<td>Claude-3-Haiku</td>
<td>41.5</td>
</tr>
<tr>
<td>Llama-3-70B-Instruct</td>
<td>41.1</td>
</tr>
<tr>
<td>GPT-4-0613</td>
<td>37.9</td>
</tr>
<tr>
<td>Mistral-Large</td>
<td>37.7</td>
</tr>
</table>
## Overall Results
In the following, we compare our model's performance with `gpt-4o` and `Llama-3-70B-Instruct` on [MMLU-Pro](https://github.com/TIGER-AI-Lab/MMLU-Pro), [Arena-Hard](https://github.com/lm-sys/arena-hard-auto/tree/main), [AlpacaEval 2.0 LC](https://github.com/tatsu-lab/alpaca_eval), MMLU, GPQA and DROP. For MMLU, GPQA and DROP, we adopt [openai/simple-evals](https://github.com/openai/simple-evals) for evaluation. For the other benchmarks, we evaluate via the official implementation.
<div style="overflow: auto">
<table>
<tr>
<th></th>
<td><b>MMLU-Pro</td>
<td><b>Arena-Hard</td>
<td><b>AlpacaEval <br> 2.0 LC</b></td>
<td><b>MMLU</b></td>
<td><b>GPQA</b></td>
<td><b>DROP <br> (F1,3-shot)</b></td>
</tr>
<tr>
<td>GPT-4o</td>
<td>72.6</td>
<td>79.5*</td>
<td>57.5</td>
<td>87.2</td>
<td>49.9</td>
<td>83.7</td>
</tr>
<tr style="font-weight: bold">
<td>Higgs-Llama-3-70B</td>
<td>63.2</td>
<td>49.6</td>
<td>38.6</td>
<td>80.8</td>
<td>42.1</td>
<td>81.6</td>
</tr>
<tr>
<td>Llama-3-70B-Instruct*</td>
<td>56.2</td>
<td>41.1</td>
<td>34.4</td>
<td>80.2</td>
<td>41.3</td>
<td>81.4</td>
</tr>
</table>
</div>
<small>*For Llama-3-70B-Instruct, the MMLU-Pro number is copied from the [MMLU-Pro leaderboard](https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro); the Arena-Hard numbers are copied from the [leaderboard updated on 5/21](https://github.com/lm-sys/arena-hard-auto/tree/main?tab=readme-ov-file#full-leaderboard-updated-0521) while we run gpt-4o ourselves; and the MMLU/GPQA/DROP are copied from [simple-evals](https://github.com/openai/simple-evals).</small>
## How to use
We use the same prompting format as in Meta-Llama-3-70B-Instruct.
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "bosonai/Higgs-Llama-3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an AI assistant that speaks in the style of Sheldon Cooper. You are arguing with the user and is trying to prove the opposite of what the user said."},
{"role": "user", "content": "The earth is round."},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=[
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
pipeline.tokenizer.eos_token_id,
],
do_sample=True,
temperature=1.0,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## License
[Our license](https://huggingface.co/bosonai/Higgs-Llama-3-70B/blob/main/LICENSE) is based on Meta's Llama 3 Community License. |
albertus-sussex/veriscrape-sbert-book-reference_8_to_verify_2-fold-5 | albertus-sussex | "2025-03-24T12:14:46Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:20032",
"loss:AttributeTripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-24T12:14:31Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20032
- loss:AttributeTripletLoss
base_model: Alibaba-NLP/gte-base-en-v1.5
widget:
- source_sentence: The Anger Workbook
sentences:
- Human Body
- isbn_13
- title
- '9781420909883'
- source_sentence: 30 June 1994
sentences:
- publication_date
- England
- 01 November 2005
- title
- source_sentence: 'Pub. Date: October 1992'
sentences:
- publication_date
- publisher
- 'Publisher: Thomson West'
- Delta (May 9, 2000)
- source_sentence: IVP Academic
sentences:
- isbn_13
- ': 9780465018802'
- Brazos Press
- publisher
- source_sentence: Master of the Game
sentences:
- publication_date
- title
- '1996'
- Rick Stein's Far Eastern Odyssey
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- silhouette_cosine
- silhouette_euclidean
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.9851751923561096
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9846402406692505
name: Cosine Accuracy
- task:
type: silhouette
name: Silhouette
dataset:
name: Unknown
type: unknown
metrics:
- type: silhouette_cosine
value: 0.8033567667007446
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.6603978872299194
name: Silhouette Euclidean
- type: silhouette_cosine
value: 0.8031685948371887
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.6609576940536499
name: Silhouette Euclidean
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-book-reference_8_to_verify_2-fold-5")
# Run inference
sentences = [
'Master of the Game',
"Rick Stein's Far Eastern Odyssey",
'1996',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9852** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.8034** |
| silhouette_euclidean | 0.6604 |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9846** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.8032** |
| silhouette_euclidean | 0.661 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 20,032 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.44 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.62 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.1 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.86 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.83 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:------------------------------|:--------------------------------------------------------------|:-------------------------------------------------------|:------------------------------|:------------------------------|
| <code>7/1/2000</code> | <code>08 November 1994</code> | <code>F My Life</code> | <code>publication_date</code> | <code>title</code> |
| <code>Pine Forge Press</code> | <code>Oxford University Press, USA (December 19, 1996)</code> | <code>Workman Publishing Company (1996)</code> | <code>publisher</code> | <code>publication_date</code> |
| <code>9781600242304</code> | <code>9780618033805</code> | <code>Scholastic Paperbacks (September 1, 2004)</code> | <code>isbn_13</code> | <code>publication_date</code> |
* Loss: <code>veriscrape.training.AttributeTripletLoss</code> with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
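`AttributeTripletLoss` lives in the private `veriscrape` package and is not documented here. Assuming it follows the standard triplet formulation with the Euclidean distance metric and margin 5 configured above, the per-triplet objective would look roughly like this sketch (illustrative, not the actual implementation):
```python
# Illustrative sketch only -- the actual veriscrape.training.AttributeTripletLoss is not public.
# Standard triplet objective with Euclidean distance and margin 5, matching the config above.
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=5.0):
    d_pos = F.pairwise_distance(anchor, positive, p=2)  # distance to the same-attribute value
    d_neg = F.pairwise_distance(anchor, negative, p=2)  # distance to a different-attribute value
    return F.relu(d_pos - d_neg + margin).mean()
```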
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,226 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.4 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.66 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.12 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.7 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.8 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:-------------------------------------------------------|:------------------------------|:--------------------------------------------|:--------------------|:-----------------------|
| <code>Drew Karpyshyn</code> | <code>Ernest Hemingway</code> | <code>9781616882914</code> | <code>author</code> | <code>isbn_13</code> |
| <code>Denene Millner</code> | <code>John Steinbeck</code> | <code>: Regnery Publishing</code> | <code>author</code> | <code>publisher</code> |
| <code>Colossians & Philemon: Preaching the Word</code> | <code>Express Makeup</code> | <code>: Zondervan Publishing Company</code> | <code>title</code> | <code>publisher</code> |
* Loss: <code>veriscrape.training.AttributeTripletLoss</code> with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1 | -1 | - | - | 0.3944 | 0.1168 |
| 1.0 | 157 | 1.0117 | 0.2685 | 0.9811 | 0.7923 |
| 2.0 | 314 | 0.1064 | 0.2295 | 0.9825 | 0.7871 |
| 3.0 | 471 | 0.081 | 0.1841 | 0.9865 | 0.7858 |
| 4.0 | 628 | 0.055 | 0.1513 | 0.9879 | 0.8258 |
| 5.0 | 785 | 0.042 | 0.1843 | 0.9852 | 0.8034 |
| -1 | -1 | - | - | 0.9846 | 0.8032 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.4.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### AttributeTripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/ZEUS-8B-V8-GGUF | mradermacher | "2024-12-13T15:43:08Z" | 41 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:T145/ZEUS-8B-V8",
"base_model:quantized:T145/ZEUS-8B-V8",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-13T07:45:33Z" | ---
base_model: T145/ZEUS-8B-V8
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/T145/ZEUS-8B-V8
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ZEUS-8B-V8-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
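For the multi-part case, the parts are typically plain byte splits that can be concatenated in order before loading; the file names below are purely illustrative (the quants listed for this repo are single files).
```bash
# Illustrative only: reassemble a split quant before loading it.
cat ZEUS-8B-V8.Q8_0.gguf.part1of2 ZEUS-8B-V8.Q8_0.gguf.part2of2 > ZEUS-8B-V8.Q8_0.gguf
```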
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V8-GGUF/resolve/main/ZEUS-8B-V8.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
harukadoyu/test | harukadoyu | "2024-05-13T07:42:44Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-05-13T07:39:42Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Coldbrew9/wheel-SKFLY-P4 | Coldbrew9 | "2025-02-21T02:49:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-21T02:42:56Z" |
## Building a fine-tuning dataset (seated pose)
```
dataset/
├── cloth/
│   ├── lower_img/
│   │   └── 00000.jpg # lower-garment image
│   ├── lower_mask/
│   │   └── 00000.jpg # mask of the lower-garment image
│   ├── upper_img/
│   │   └── 00000.jpg # upper-garment image
│   └── upper_mask/
│       └── 00000.jpg # mask of the upper-garment image
├── image/
│   └── 00000.jpg # person image
├── image_mask_L/ # lower-body mask of the person image (Lower part)
│   └── 00000.jpg
└── image_mask_U/ # upper-body mask of the person image (Upper part)
    └── 00000.jpg
```
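For reference, a minimal sketch of how this layout could be enumerated into training samples; pairing files purely by their shared zero-padded index is an assumption, not something documented in this repo.
```python
# Illustrative walker over the layout above; assumes every file shares the same zero-padded index.
from pathlib import Path

root = Path("dataset")
for person in sorted((root / "image").glob("*.jpg")):
    idx = person.stem  # e.g. "00000"
    sample = {
        "person": person,
        "upper_cloth": root / "cloth" / "upper_img" / f"{idx}.jpg",
        "upper_cloth_mask": root / "cloth" / "upper_mask" / f"{idx}.jpg",
        "lower_cloth": root / "cloth" / "lower_img" / f"{idx}.jpg",
        "lower_cloth_mask": root / "cloth" / "lower_mask" / f"{idx}.jpg",
        "person_upper_mask": root / "image_mask_U" / f"{idx}.jpg",
        "person_lower_mask": root / "image_mask_L" / f"{idx}.jpg",
    }
    print(sample["person"].name, "->", sample["upper_cloth"].name)
```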
|
Pragash-Mohanarajah/bert-base-uncased-finetuned-bible | Pragash-Mohanarajah | "2024-05-07T17:30:47Z" | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:bible",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-07T15:58:58Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- bible
model-index:
- name: bert-base-uncased-finetuned-bible
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-bible
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the bible dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3702
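Since this is a masked-language model fine-tuned on Bible text, a minimal usage sketch looks like the following; the example sentence is illustrative.
```python
# Illustrative only: query the fine-tuned masked-language model.
from transformers import pipeline

fill = pipeline("fill-mask", model="Pragash-Mohanarajah/bert-base-uncased-finetuned-bible")
for pred in fill("In the beginning God created the heaven and the [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```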
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6684 | 1.0 | 2341 | 1.4889 |
| 1.5534 | 2.0 | 4682 | 1.3957 |
| 1.5136 | 3.0 | 7023 | 1.3713 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ttttttris/bert-finetuned-squad | ttttttris | "2025-02-04T01:06:30Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2025-02-03T20:49:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
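The repo name suggests SQuAD-style extractive question answering, so a minimal usage sketch would look like this (the dataset is not documented, and the question/context pair is illustrative):
```python
# Illustrative only: run extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="ttttttris/bert-finetuned-squad")
result = qa(
    question="What is the model fine-tuned from?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], result["score"])
```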
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.48.2
- Pytorch 2.1.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
abdullahkhudhair/CE-categories-spanish_new | abdullahkhudhair | "2025-04-11T07:55:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-11T07:55:01Z" | |
lesso04/342dda22-59fc-44c2-9823-9fcb1c4ebe2f | lesso04 | "2025-01-15T16:26:18Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T16:13:04Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 342dda22-59fc-44c2-9823-9fcb1c4ebe2f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: true
chat_template: llama3
datasets:
- data_files:
- c17c59740c4fc07c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c17c59740c4fc07c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/342dda22-59fc-44c2-9823-9fcb1c4ebe2f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/c17c59740c4fc07c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21989a9f-7539-4956-94ad-ee9bd5631eb8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21989a9f-7539-4956-94ad-ee9bd5631eb8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 342dda22-59fc-44c2-9823-9fcb1c4ebe2f
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
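Because this repo contains a LoRA adapter rather than full model weights, a minimal loading sketch looks like the following. It assumes the adapter files sit at the repo root and mirrors the `trust_remote_code` setting from the training config above.
```python
# Illustrative only: attach the LoRA adapter in this repo to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "lesso04/342dda22-59fc-44c2-9823-9fcb1c4ebe2f")
tokenizer = AutoTokenizer.from_pretrained("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)
```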
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0017 | 5 | nan |
| 0.0 | 0.0034 | 10 | nan |
| 0.0 | 0.0051 | 15 | nan |
| 0.0 | 0.0068 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
apple/aimv2-huge-patch14-448 | apple | "2025-02-28T18:31:12Z" | 2,848 | 3 | transformers | [
"transformers",
"jax",
"safetensors",
"aimv2",
"feature-extraction",
"vision",
"image-feature-extraction",
"mlx",
"pytorch",
"custom_code",
"arxiv:2411.14402",
"license:apple-amlr",
"model-index",
"region:us"
] | image-feature-extraction | "2024-10-29T15:38:36Z" | ---
library_name: transformers
license: apple-amlr
metrics:
- accuracy
pipeline_tag: image-feature-extraction
tags:
- vision
- image-feature-extraction
- mlx
- pytorch
model-index:
- name: aimv2-huge-patch14-448
results:
- task:
type: classification
name: Classification
dataset:
name: imagenet-1k
type: imagenet-1k
metrics:
- type: accuracy
value: 88.6
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: inaturalist-18
type: inaturalist-18
metrics:
- type: accuracy
value: 82.8
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: cifar10
type: cifar10
metrics:
- type: accuracy
value: 99.4
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: cifar100
type: cifar100
metrics:
- type: accuracy
value: 93.6
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: food101
type: food101
metrics:
- type: accuracy
value: 97.0
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: dtd
type: dtd
metrics:
- type: accuracy
value: 88.9
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: oxford-pets
type: oxford-pets
metrics:
- type: accuracy
value: 96.8
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: stanford-cars
type: stanford-cars
metrics:
- type: accuracy
value: 96.5
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: camelyon17
type: camelyon17
metrics:
- type: accuracy
value: 93.4
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: patch-camelyon
type: patch-camelyon
metrics:
- type: accuracy
value: 89.6
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: rxrx1
type: rxrx1
metrics:
- type: accuracy
value: 7.8
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: eurosat
type: eurosat
metrics:
- type: accuracy
value: 98.7
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: fmow
type: fmow
metrics:
- type: accuracy
value: 64.8
name: Accuracy
verified: false
- task:
type: classification
name: Classification
dataset:
name: domainnet-infographic
type: domainnet-infographic
metrics:
- type: accuracy
value: 74.5
name: Accuracy
verified: false
---
# Introduction
[[`AIMv2 Paper`](https://arxiv.org/abs/2411.14402)] [[`BibTeX`](#citation)]
We introduce the AIMv2 family of vision models pre-trained with a multimodal autoregressive objective.
AIMv2 pre-training is simple and straightforward, and it trains and scales effectively. Some AIMv2 highlights include:
1. Outperforms OAI CLIP and SigLIP on the majority of multimodal understanding benchmarks.
2. Outperforms DINOv2 on open-vocabulary object detection and referring expression comprehension.
3. Exhibits strong recognition performance with AIMv2-3B achieving *89.5% on ImageNet using a frozen trunk*.
<img src="aimv2_overview_light.png" alt="AIMv2 Overview"/>
## Usage
### PyTorch
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained(
"apple/aimv2-huge-patch14-448",
)
model = AutoModel.from_pretrained(
"apple/aimv2-huge-patch14-448",
trust_remote_code=True,
)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
```
### JAX
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, FlaxAutoModel
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained(
"apple/aimv2-huge-patch14-448",
)
model = FlaxAutoModel.from_pretrained(
"apple/aimv2-huge-patch14-448",
trust_remote_code=True,
)
inputs = processor(images=image, return_tensors="jax")
outputs = model(**inputs)
```
## Citation
If you find our work useful, please consider citing us as:
```bibtex
@misc{fini2024multimodalautoregressivepretraininglarge,
author = {Fini, Enrico and Shukor, Mustafa and Li, Xiujun and Dufter, Philipp and Klein, Michal and Haldimann, David and Aitharaju, Sai and da Costa, Victor Guilherme Turrisi and Béthune, Louis and Gan, Zhe and Toshev, Alexander T and Eichner, Marcin and Nabi, Moin and Yang, Yinfei and Susskind, Joshua M. and El-Nouby, Alaaeldin},
url = {https://arxiv.org/abs/2411.14402},
eprint = {2411.14402},
eprintclass = {cs.CV},
eprinttype = {arXiv},
title = {Multimodal Autoregressive Pre-training of Large Vision Encoders},
year = {2024},
}
```
|
AniketArtani/final_testing | AniketArtani | "2024-04-04T07:54:18Z" | 103 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-02T13:44:32Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/bart-base
model-index:
- name: final_testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_testing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
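A minimal usage sketch for this seq2seq checkpoint is shown below; the task it was tuned for is not documented, so the summarization-style prompt is only illustrative.
```python
# Illustrative only: generate with the fine-tuned BART checkpoint.
from transformers import pipeline

generator = pipeline("text2text-generation", model="AniketArtani/final_testing")
print(generator("Summarize: BART is a sequence-to-sequence model pretrained as a denoising autoencoder.")[0]["generated_text"])
```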
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.1
|
macadeliccc/samantha-1.1-MBX-7B | macadeliccc | "2024-02-20T16:57:09Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:cognitivecomputations/samantha-1.1-westlake-7b",
"base_model:merge:cognitivecomputations/samantha-1.1-westlake-7b",
"base_model:macadeliccc/MBX-7B-v3-DPO",
"base_model:merge:macadeliccc/MBX-7B-v3-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-13T20:24:52Z" | ---
base_model:
- macadeliccc/MBX-7B-v3-DPO
- cognitivecomputations/samantha-1.1-westlake-7b
tags:
- mergekit
- merge
license: apache-2.0
---
# Samantha 1.1 MBX 7B

**Prompt Template:**
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
<|im_start|>assistant
```
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)
* [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: cognitivecomputations/samantha-1.1-westlake-7b
layer_range: [0, 32]
- model: macadeliccc/MBX-7B-v3-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/MBX-7B-v3-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## GGUF
TODO
## Ollama
```bash
ollama run macadeliccc/samantha-1.1-westlake-7b
```
## Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/samantha-1.1-MBX-7B")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/samantha-1.1-MBX-7B")

messages = [
    {"role": "system", "content": "You are a caring and empathetic sentient AI companion named Samantha."},
    {"role": "user", "content": "Hello, what is your name?"}
]

# Build the prompt with the chat template, generate a reply, and decode only the new tokens
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
|
isspek/xlnet-base-cased_ebola_mistral_4_2e-5_16_undersampling_0.3 | isspek | "2024-12-19T13:20:03Z" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-23T12:39:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Antoinegg1/llama-3-8b_safe_0.5to0.25_1 | Antoinegg1 | "2024-06-08T01:53:10Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T22:56:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CuckmeisterFuller/Mistral-Small-24B-Instruct-2501-bf16-Q2-mlx | CuckmeisterFuller | "2025-01-31T02:11:59Z" | 37 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"mlx",
"mlx-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mlx-community/Mistral-Small-24B-Instruct-2501-bf16",
"base_model:quantized:mlx-community/Mistral-Small-24B-Instruct-2501-bf16",
"license:apache-2.0",
"2-bit",
"region:us"
] | null | "2025-01-31T02:11:34Z" | ---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: apache-2.0
library_name: vllm
base_model: mlx-community/Mistral-Small-24B-Instruct-2501-bf16
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mlx
- mlx
- mlx-my-repo
---
# CuckmeisterFuller/Mistral-Small-24B-Instruct-2501-bf16-Q2-mlx
The Model [CuckmeisterFuller/Mistral-Small-24B-Instruct-2501-bf16-Q2-mlx](https://huggingface.co/CuckmeisterFuller/Mistral-Small-24B-Instruct-2501-bf16-Q2-mlx) was converted to MLX format from [mlx-community/Mistral-Small-24B-Instruct-2501-bf16](https://huggingface.co/mlx-community/Mistral-Small-24B-Instruct-2501-bf16) using mlx-lm version **0.20.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("CuckmeisterFuller/Mistral-Small-24B-Instruct-2501-bf16-Q2-mlx")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
zelk12/MT5-Max-Merge_02012025163610-BMA-gemma-2-9B | zelk12 | "2025-01-14T15:29:38Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B",
"base_model:merge:zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B",
"base_model:zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B",
"base_model:merge:zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-14T15:23:16Z" | ---
base_model:
- zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B
- zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B](https://huggingface.co/zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B)
* [zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B](https://huggingface.co/zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B
- model: zelk12/MT5-Max-Merge_02012025163610-MA-gemma-2-MTM4MTM3-9B
merge_method: slerp
base_model: zelk12/MT5-Max-Merge_02012025163610-BB-gemma-2-MTM4MTg2GI-9B
dtype: bfloat16
parameters:
t: 0.25
```
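As an illustration not included in the original card, a merge with this configuration could presumably be reproduced through mergekit's Python entry points (`MergeConfiguration` and `run_merge`); the file path and options below are assumptions:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML shown above has been saved to config.yaml.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Writes the merged model to ./merged; the options are illustrative defaults.
run_merge(merge_config, out_path="./merged", options=MergeOptions(copy_tokenizer=True))
```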
|
eio24/DP_Czert_fine-tuned | eio24 | "2024-04-07T18:52:30Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-07T18:52:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
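In the absence of an official snippet, a minimal sketch (assuming the checkpoint works with the standard `transformers` text-classification pipeline; the label set and language are undocumented) would look like:

```python
from transformers import pipeline

# Hypothetical usage; the model's labels and training domain are not documented in this card.
classifier = pipeline("text-classification", model="eio24/DP_Czert_fine-tuned")
print(classifier("Text to classify goes here."))
```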
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CatBarks/t5_esSEC4_4_tokenizer | CatBarks | "2024-02-29T08:28:10Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-29T08:28:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
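No snippet is provided; as a minimal sketch (assuming, from the repository name, that this repo hosts a T5-style tokenizer):

```python
from transformers import AutoTokenizer

# Hypothetical usage; the tokenizer's vocabulary and intended corpus are undocumented.
tokenizer = AutoTokenizer.from_pretrained("CatBarks/t5_esSEC4_4_tokenizer")
print(tokenizer("Example text to tokenize.")["input_ids"])
```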
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dabrown/cc19fb13-0204-430d-96f4-5e344edaab60 | dabrown | "2025-02-28T11:02:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | "2025-02-28T06:35:26Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc19fb13-0204-430d-96f4-5e344edaab60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ab5508654347f3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ab5508654347f3c_train_data.json
type:
field_instruction: user_prompt
field_output: resp
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/cc19fb13-0204-430d-96f4-5e344edaab60
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/2ab5508654347f3c_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 946903bb-e331-4529-be0a-a81d1c829510
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 946903bb-e331-4529-be0a-a81d1c829510
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cc19fb13-0204-430d-96f4-5e344edaab60
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1234
## Model description
More information needed
## Intended uses & limitations
More information needed
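As a usage sketch not taken from this card, the LoRA adapter could presumably be applied on top of the base model with PEFT (repo IDs come from the metadata above; generation settings are placeholders):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card metadata, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "dabrown/cc19fb13-0204-430d-96f4-5e344edaab60")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```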
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1081
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6039 | 0.0009 | 1 | 0.8865 |
| 0.1576 | 0.2508 | 271 | 0.1417 |
| 0.0507 | 0.5017 | 542 | 0.1345 |
| 0.1386 | 0.7525 | 813 | 0.1234 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
eglkan1/mBART-TextSimp-LT-BatchSize8-lr5e-5 | eglkan1 | "2024-04-11T10:44:02Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-11T10:05:52Z" | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
metrics:
- rouge
- sacrebleu
model-index:
- name: mBART-TextSimp-LT-BatchSize8-lr5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART-TextSimp-LT-BatchSize8-lr5e-5
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4296
- Rouge1: 0.0605
- Rouge2: 0.0078
- Rougel: 0.0593
- Sacrebleu: 0.044
- Gen Len: 34.5776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| 8.0008 | 1.0 | 104 | 7.0565 | 0.1958 | 0.1282 | 0.1868 | 7.9463 | 511.6945 |
| 0.3454 | 2.0 | 209 | 0.1874 | 0.6646 | 0.4862 | 0.6559 | 41.0808 | 34.5752 |
| 0.0728 | 3.0 | 313 | 0.0748 | 0.7063 | 0.5426 | 0.6984 | 48.033 | 34.5752 |
| 0.0491 | 4.0 | 418 | 0.0630 | 0.7346 | 0.5861 | 0.7248 | 51.6574 | 34.5752 |
| 0.755 | 5.0 | 522 | 0.7158 | 0.0008 | 0.0 | 0.0009 | 0.0 | 35.5752 |
| 0.4913 | 6.0 | 627 | 0.4653 | 0.0218 | 0.0008 | 0.0219 | 0.022 | 34.6134 |
| 0.4771 | 7.0 | 731 | 0.4525 | 0.0385 | 0.0034 | 0.0382 | 0.0308 | 34.926 |
| 0.4224 | 7.96 | 832 | 0.4296 | 0.0605 | 0.0078 | 0.0593 | 0.044 | 34.5776 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sniperfix/7a275ad6-d423-4b47-aeb0-d87b2bb38d8c | sniperfix | "2025-01-31T21:25:00Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | "2025-01-31T20:17:38Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a275ad6-d423-4b47-aeb0-d87b2bb38d8c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7a30332c2a7b7854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a30332c2a7b7854_train_data.json
type:
field_input: options
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: sniperfix/7a275ad6-d423-4b47-aeb0-d87b2bb38d8c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/7a30332c2a7b7854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: indexjupri-sniper-country
wandb_mode: online
wandb_name: e7520f20-550c-418e-944f-52ac3fc9cfcf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e7520f20-550c-418e-944f-52ac3fc9cfcf
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 7a275ad6-d423-4b47-aeb0-d87b2bb38d8c
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0053 | 1 | nan |
| 0.0 | 0.0426 | 8 | nan |
| 0.0 | 0.0851 | 16 | nan |
| 0.0 | 0.1277 | 24 | nan |
| 0.0 | 0.1703 | 32 | nan |
| 0.0 | 0.2129 | 40 | nan |
| 0.0 | 0.2554 | 48 | nan |
| 0.0 | 0.2980 | 56 | nan |
| 0.0 | 0.3406 | 64 | nan |
| 0.0 | 0.3832 | 72 | nan |
| 0.0 | 0.4257 | 80 | nan |
| 0.0 | 0.4683 | 88 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kaist-ai/CoT-T5-3B | kaist-ai | "2023-10-14T14:42:55Z" | 21 | 11 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kaist-ai/CoT-Collection",
"dataset:SirNeural/flan_v2",
"arxiv:2305.14045",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-26T16:57:54Z" | ---
tags:
- text2text-generation
datasets:
- kaist-ai/CoT-Collection
- SirNeural/flan_v2
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
---
## Links for Reference
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** [email protected]
# TL;DR
CoT-T5 is a language model that uses [Flan-T5](https://huggingface.co/google/flan-t5-xxl) as its base model and is CoT fine-tuned on 1.84 million rationales across 1,060 tasks from the [CoT Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
Because it was CoT fine-tuned on such a large number of rationales, it shows superior chain-of-thought performance compared to Flan-T5.
CoT-T5 can be used for (1) solving unseen tasks in a zero-shot setting, and (2) adapting to new tasks with CoT fine-tuning.
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All CoT-T5 Checkpoints](https://huggingface.co/models?search=cot-t5)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2305.14045)
- [GitHub Repo](https://github.com/kaistAI/CoT-Collection)
CoT-T5 is trained at two different sizes (3B and 11B).
You can check the 11B-sized LM on [this page](https://huggingface.co/kaist-ai/CoT-T5-3B).
Also, check out our dataset on [this page](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
## License
The CoT Collection and CoT-T5 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-3B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-3B")
input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirecitons: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-3B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-3B", device_map="auto")
input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirecitons: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-3B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-3B", device_map="auto", torch_dtype=torch.float16)
input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirecitons: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("kaist-ai/CoT-T5-3B")
model = T5ForConditionalGeneration.from_pretrained("kaist-ai/CoT-T5-3B", device_map="auto", load_in_8bit=True)
input_text = "Read the Directions and try to pick among A,B,C,D.\n\nDirecitons: A good way to figure out the relationship in a given question is to make up a sentence that describes the relationship between the first two words. Then, try to use the same sentence to find out which of the answer choices completes the same relationship with the third word.\nQuestion: Odometer is to mileage as compass is to?\nOptions: (A) speed, (B) hiking, (C) needle, (D) direction.\nLet's think step by step.\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Citation
If you find this model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
geoffwalters/finetuned2_distilgpt2 | geoffwalters | "2025-04-02T13:14:58Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T12:52:48Z" | |
Aivesa/3b45a5fe-37d5-4baa-91e1-1c813f38b31a | Aivesa | "2025-01-18T17:10:14Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"dataset:Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-18T17:08:57Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b
model-index:
- name: 3b45a5fe-37d5-4baa-91e1-1c813f38b31a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: /workspace/axolotl/data/prepared
datasets:
- ds_type: json
format: custom
path: Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b
type:
field_instruction: sentence1
field_output: sentence2
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Aivesa/3b45a5fe-37d5-4baa-91e1-1c813f38b31a
hub_private_repo: true
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: /workspace/axolotl/outputs
pad_to_sequence_len: true
push_to_hub: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_safetensors: true
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_accelerate: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7c40032f-e667-40ad-9658-3748512bf15b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7c40032f-e667-40ad-9658-3748512bf15b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3b45a5fe-37d5-4baa-91e1-1c813f38b31a
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7803 | 0.0030 | 3 | 3.5993 |
| 2.9619 | 0.0060 | 6 | 3.4711 |
| 3.3801 | 0.0090 | 9 | 3.1446 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.0a0+e000cf0ad9.nv24.10
- Datasets 3.1.0
- Tokenizers 0.21.0 |
mergekit-community/TopEvolution | mergekit-community | "2024-05-20T21:01:09Z" | 12 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-20T20:53:23Z" | ---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- mergekit-community/mergekit-slerp-ebgdloh
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [mergekit-community/mergekit-slerp-ebgdloh](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: mergekit-community/mergekit-slerp-ebgdloh
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-ebgdloh
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
|
Weni/WeniGPT-2.8.1-Zephyr-7B-zephyr-prompt-binarized | Weni | "2024-03-08T15:49:25Z" | 0 | 0 | trl | [
"trl",
"safetensors",
"DPO",
"WeniGPT",
"en",
"base_model:Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT",
"base_model:finetune:Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT",
"license:mit",
"region:us"
] | null | "2024-03-08T15:16:59Z" | ---
license: mit
library_name: "trl"
tags:
- DPO
- WeniGPT
base_model: Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT
model-index:
- name: Weni/WeniGPT-2.8.1-Zephyr-7B-zephyr-prompt-binarized
results: []
language: ['en']
---
# Weni/WeniGPT-2.8.1-Zephyr-7B-zephyr-prompt-binarized
This model is a fine-tuned version of [Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT](https://huggingface.co/Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT) on the HuggingFaceH4/ultrafeedback_binarized dataset with the DPO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
It achieves the following results on the evaluation set:
- eval_loss: 1.9671216011047363
- eval_runtime: 94.0811
- eval_samples_per_second: 2.126
- eval_steps_per_second: 0.531
- eval_rewards/chosen: 16.395244598388672
- eval_rewards/rejected: 11.052546501159668
- eval_rewards/accuracies: 0.5299999713897705
- eval_rewards/margins: 5.342697620391846
- eval_logps/rejected: -302.33038330078125
- eval_logps/chosen: -315.1849365234375
- eval_logits/rejected: -2.665374517440796
- eval_logits/chosen: -2.6737234592437744
- epoch: 1.0
## Intended uses & limitations
This model has not been trained to avoid specific instructions.
## Training procedure
Finetuning was done on the model Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT with the following prompt:
```
Prompt:
<|user|>{prompt}</s>
Chosen:
<|assistant|>{chosen}</s>
Rejected:
<|assistant|>{rejected}</s>
```
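As an illustration only (not part of the original card), the same template would presumably be applied at inference time by wrapping the user message and letting the model continue after the assistant tag:

```python
# Minimal sketch of the prompt format above; the tag layout mirrors the training template,
# but the exact whitespace is an assumption.
def build_prompt(user_message: str) -> str:
    return f"<|user|>{user_message}</s>\n<|assistant|>"

print(build_prompt("Summarize the WeniGPT project in one sentence."))
```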
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 4
- num_gpus: 1
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 112
- quantization_type: bitsandbytes
- LoRA: ("\n - bits: 4\n - use_exllama: True\n - device_map: auto\n - use_cache: False\n - lora_r: 16\n - lora_alpha: 16\n - lora_dropout: 0.05\n - bias: none\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']\n - task_type: CAUSAL_LM",)
### Training results
### Framework versions
- transformers==4.38.2
- datasets==2.17.1
- peft==0.8.2
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.42
- huggingface_hub==0.20.3
- seqeval==1.2.2
- optimum==1.17.1
- auto-gptq==0.7.0
- gpustat==1.1.1
- deepspeed==0.13.2
- wandb==0.16.3
- trl==0.7.11
- accelerate==0.27.2
- coloredlogs==15.0.1
- traitlets==5.14.1
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
### Hardware
- Cloud provided: runpod.io
|
libok/test | libok | "2022-11-10T06:57:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-11-10T06:56:42Z" | a robot reading the book and playing the piano |
asun17904/multiberts-seed_1_winobias_classifieronly | asun17904 | "2023-03-24T16:00:02Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-24T03:11:15Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: multiberts-seed_1_winobias_classifieronly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiberts-seed_1_winobias_classifieronly
This model is a fine-tuned version of [google/multiberts-seed_1](https://huggingface.co/google/multiberts-seed_1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
- Accuracy: 0.5114
- Tp: 0.2734
- Tn: 0.2380
- Fp: 0.2620
- Fn: 0.2266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|
| 0.7029 | 0.8 | 20 | 0.6948 | 0.5019 | 0.1951 | 0.3068 | 0.1932 | 0.3049 |
| 0.6937 | 1.6 | 40 | 0.6952 | 0.4931 | 0.3390 | 0.1540 | 0.3460 | 0.1610 |
| 0.6974 | 2.4 | 60 | 0.6954 | 0.4937 | 0.3567 | 0.1370 | 0.3630 | 0.1433 |
| 0.7041 | 3.2 | 80 | 0.6946 | 0.5051 | 0.2191 | 0.2860 | 0.2140 | 0.2809 |
| 0.6975 | 4.0 | 100 | 0.6947 | 0.5013 | 0.1799 | 0.3213 | 0.1787 | 0.3201 |
| 0.6996 | 4.8 | 120 | 0.6948 | 0.5025 | 0.1521 | 0.3504 | 0.1496 | 0.3479 |
| 0.7008 | 5.6 | 140 | 0.6944 | 0.4975 | 0.2841 | 0.2134 | 0.2866 | 0.2159 |
| 0.7004 | 6.4 | 160 | 0.6943 | 0.4968 | 0.1850 | 0.3119 | 0.1881 | 0.3150 |
| 0.6913 | 7.2 | 180 | 0.6944 | 0.4924 | 0.1553 | 0.3371 | 0.1629 | 0.3447 |
| 0.703 | 8.0 | 200 | 0.6941 | 0.5025 | 0.2784 | 0.2241 | 0.2759 | 0.2216 |
| 0.6975 | 8.8 | 220 | 0.6941 | 0.4987 | 0.2197 | 0.2790 | 0.2210 | 0.2803 |
| 0.6964 | 9.6 | 240 | 0.6942 | 0.4949 | 0.2058 | 0.2891 | 0.2109 | 0.2942 |
| 0.692 | 10.4 | 260 | 0.6943 | 0.4949 | 0.3037 | 0.1913 | 0.3087 | 0.1963 |
| 0.6939 | 11.2 | 280 | 0.6943 | 0.4987 | 0.1900 | 0.3087 | 0.1913 | 0.3100 |
| 0.7043 | 12.0 | 300 | 0.6942 | 0.5044 | 0.2551 | 0.2494 | 0.2506 | 0.2449 |
| 0.7036 | 12.8 | 320 | 0.6942 | 0.4912 | 0.2102 | 0.2809 | 0.2191 | 0.2898 |
| 0.697 | 13.6 | 340 | 0.6943 | 0.4975 | 0.1604 | 0.3371 | 0.1629 | 0.3396 |
| 0.7028 | 14.4 | 360 | 0.6950 | 0.5032 | 0.3939 | 0.1092 | 0.3908 | 0.1061 |
| 0.7012 | 15.2 | 380 | 0.6940 | 0.4962 | 0.2045 | 0.2917 | 0.2083 | 0.2955 |
| 0.6976 | 16.0 | 400 | 0.6940 | 0.4968 | 0.2102 | 0.2866 | 0.2134 | 0.2898 |
| 0.695 | 16.8 | 420 | 0.6944 | 0.5095 | 0.1452 | 0.3643 | 0.1357 | 0.3548 |
| 0.6985 | 17.6 | 440 | 0.6939 | 0.5013 | 0.2210 | 0.2803 | 0.2197 | 0.2790 |
| 0.6946 | 18.4 | 460 | 0.6939 | 0.5032 | 0.2765 | 0.2266 | 0.2734 | 0.2235 |
| 0.6975 | 19.2 | 480 | 0.6940 | 0.4962 | 0.1749 | 0.3213 | 0.1787 | 0.3251 |
| 0.6958 | 20.0 | 500 | 0.6939 | 0.4905 | 0.2058 | 0.2847 | 0.2153 | 0.2942 |
| 0.6947 | 20.8 | 520 | 0.6938 | 0.5057 | 0.2771 | 0.2285 | 0.2715 | 0.2229 |
| 0.7044 | 21.6 | 540 | 0.6940 | 0.5019 | 0.2986 | 0.2033 | 0.2967 | 0.2014 |
| 0.698 | 22.4 | 560 | 0.6941 | 0.4918 | 0.3201 | 0.1717 | 0.3283 | 0.1799 |
| 0.7016 | 23.2 | 580 | 0.6939 | 0.5076 | 0.2771 | 0.2304 | 0.2696 | 0.2229 |
| 0.7029 | 24.0 | 600 | 0.6939 | 0.5063 | 0.2765 | 0.2298 | 0.2702 | 0.2235 |
| 0.6975 | 24.8 | 620 | 0.6938 | 0.5025 | 0.2904 | 0.2121 | 0.2879 | 0.2096 |
| 0.6966 | 25.6 | 640 | 0.6940 | 0.5032 | 0.1660 | 0.3371 | 0.1629 | 0.3340 |
| 0.6974 | 26.4 | 660 | 0.6938 | 0.4994 | 0.1926 | 0.3068 | 0.1932 | 0.3074 |
| 0.6998 | 27.2 | 680 | 0.6938 | 0.5013 | 0.2229 | 0.2784 | 0.2216 | 0.2771 |
| 0.6899 | 28.0 | 700 | 0.6937 | 0.5082 | 0.25 | 0.2582 | 0.2418 | 0.25 |
| 0.6954 | 28.8 | 720 | 0.6937 | 0.4968 | 0.2109 | 0.2860 | 0.2140 | 0.2891 |
| 0.6926 | 29.6 | 740 | 0.6941 | 0.4899 | 0.3479 | 0.1420 | 0.3580 | 0.1521 |
| 0.6936 | 30.4 | 760 | 0.6938 | 0.5006 | 0.2822 | 0.2184 | 0.2816 | 0.2178 |
| 0.6911 | 31.2 | 780 | 0.6937 | 0.5057 | 0.2519 | 0.2538 | 0.2462 | 0.2481 |
| 0.69 | 32.0 | 800 | 0.6938 | 0.5038 | 0.2904 | 0.2134 | 0.2866 | 0.2096 |
| 0.6953 | 32.8 | 820 | 0.6937 | 0.5051 | 0.2765 | 0.2285 | 0.2715 | 0.2235 |
| 0.6971 | 33.6 | 840 | 0.6937 | 0.4956 | 0.2020 | 0.2936 | 0.2064 | 0.2980 |
| 0.6983 | 34.4 | 860 | 0.6937 | 0.5025 | 0.2727 | 0.2298 | 0.2702 | 0.2273 |
| 0.698 | 35.2 | 880 | 0.6938 | 0.4987 | 0.3024 | 0.1963 | 0.3037 | 0.1976 |
| 0.6949 | 36.0 | 900 | 0.6938 | 0.5032 | 0.3081 | 0.1951 | 0.3049 | 0.1919 |
| 0.6969 | 36.8 | 920 | 0.6937 | 0.5082 | 0.2885 | 0.2197 | 0.2803 | 0.2115 |
| 0.6978 | 37.6 | 940 | 0.6937 | 0.5088 | 0.3087 | 0.2001 | 0.2999 | 0.1913 |
| 0.6965 | 38.4 | 960 | 0.6936 | 0.5088 | 0.2588 | 0.25 | 0.25 | 0.2412 |
| 0.6929 | 39.2 | 980 | 0.6936 | 0.5101 | 0.2620 | 0.2481 | 0.2519 | 0.2380 |
| 0.6967 | 40.0 | 1000 | 0.6936 | 0.5101 | 0.2702 | 0.2399 | 0.2601 | 0.2298 |
| 0.6971 | 40.8 | 1020 | 0.6936 | 0.5069 | 0.2431 | 0.2639 | 0.2361 | 0.2569 |
| 0.6976 | 41.6 | 1040 | 0.6936 | 0.5063 | 0.2418 | 0.2645 | 0.2355 | 0.2582 |
| 0.6989 | 42.4 | 1060 | 0.6936 | 0.5038 | 0.2304 | 0.2734 | 0.2266 | 0.2696 |
| 0.6995 | 43.2 | 1080 | 0.6936 | 0.5019 | 0.2254 | 0.2765 | 0.2235 | 0.2746 |
| 0.6981 | 44.0 | 1100 | 0.6936 | 0.5069 | 0.2386 | 0.2683 | 0.2317 | 0.2614 |
| 0.6914 | 44.8 | 1120 | 0.6936 | 0.5095 | 0.25 | 0.2595 | 0.2405 | 0.25 |
| 0.6936 | 45.6 | 1140 | 0.6936 | 0.5095 | 0.25 | 0.2595 | 0.2405 | 0.25 |
| 0.6951 | 46.4 | 1160 | 0.6936 | 0.5107 | 0.2734 | 0.2374 | 0.2626 | 0.2266 |
| 0.6964 | 47.2 | 1180 | 0.6936 | 0.5114 | 0.2854 | 0.2260 | 0.2740 | 0.2146 |
| 0.7004 | 48.0 | 1200 | 0.6936 | 0.5114 | 0.2822 | 0.2292 | 0.2708 | 0.2178 |
| 0.696 | 48.8 | 1220 | 0.6936 | 0.5088 | 0.2759 | 0.2330 | 0.2670 | 0.2241 |
| 0.6966 | 49.6 | 1240 | 0.6936 | 0.5114 | 0.2734 | 0.2380 | 0.2620 | 0.2266 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
bikashpatra/flux-test-3 | bikashpatra | "2024-08-21T00:13:37Z" | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-08-20T23:51:22Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: INKU
---
# Flux Test 3
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `INKU` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bikashpatra/flux-test-3', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
raraujo/bert-finetuned-ner | raraujo | "2025-03-12T14:14:15Z" | 44 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-12-06T01:28:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: raraujo/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# raraujo/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0688
- Validation Loss: 0.0660
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5187 | 0.1266 | 0 |
| 0.1175 | 0.0574 | 1 |
| 0.0798 | 0.0548 | 2 |
| 0.0688 | 0.0660 | 3 |
### Framework versions
- Transformers 4.47.0
- TensorFlow 2.18.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
apwic/sentiment-lora-r2a1d0.1-0 | apwic | "2024-05-17T15:22:24Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | "2024-05-17T14:49:10Z" | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a1d0.1-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a1d0.1-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.8471
- Precision: 0.8138
- Recall: 0.8243
- F1: 0.8187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5634 | 1.0 | 122 | 0.5108 | 0.7193 | 0.6572 | 0.6489 | 0.6524 |
| 0.5081 | 2.0 | 244 | 0.5049 | 0.7218 | 0.6829 | 0.7082 | 0.6888 |
| 0.4924 | 3.0 | 366 | 0.4667 | 0.7494 | 0.6977 | 0.6977 | 0.6977 |
| 0.4698 | 4.0 | 488 | 0.4392 | 0.7794 | 0.7349 | 0.7114 | 0.7207 |
| 0.4519 | 5.0 | 610 | 0.4548 | 0.7469 | 0.7169 | 0.7534 | 0.7226 |
| 0.4356 | 6.0 | 732 | 0.4111 | 0.8145 | 0.7770 | 0.7713 | 0.7740 |
| 0.421 | 7.0 | 854 | 0.4101 | 0.7945 | 0.7538 | 0.7721 | 0.7612 |
| 0.4039 | 8.0 | 976 | 0.3829 | 0.8296 | 0.7949 | 0.7919 | 0.7934 |
| 0.3887 | 9.0 | 1098 | 0.3800 | 0.8321 | 0.7972 | 0.7987 | 0.7979 |
| 0.3797 | 10.0 | 1220 | 0.3768 | 0.8371 | 0.8044 | 0.7997 | 0.8020 |
| 0.368 | 11.0 | 1342 | 0.3842 | 0.8221 | 0.7846 | 0.8016 | 0.7918 |
| 0.3598 | 12.0 | 1464 | 0.3778 | 0.8271 | 0.7902 | 0.8051 | 0.7968 |
| 0.3548 | 13.0 | 1586 | 0.3624 | 0.8471 | 0.8167 | 0.8118 | 0.8142 |
| 0.3469 | 14.0 | 1708 | 0.3637 | 0.8446 | 0.8120 | 0.8151 | 0.8135 |
| 0.3431 | 15.0 | 1830 | 0.3685 | 0.8396 | 0.8049 | 0.8165 | 0.8102 |
| 0.3275 | 16.0 | 1952 | 0.3664 | 0.8371 | 0.8017 | 0.8172 | 0.8086 |
| 0.3288 | 17.0 | 2074 | 0.3590 | 0.8396 | 0.8055 | 0.8115 | 0.8084 |
| 0.3335 | 18.0 | 2196 | 0.3607 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
| 0.3239 | 19.0 | 2318 | 0.3613 | 0.8446 | 0.8107 | 0.8226 | 0.8161 |
| 0.327 | 20.0 | 2440 | 0.3608 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Kimata/gpt2-medium-Vizuosense | Kimata | "2023-11-22T15:45:26Z" | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:Kimata/gpt_driver_dataset_processed",
"region:us"
] | text-generation | "2023-11-22T15:41:53Z" | ---
datasets:
- Kimata/gpt_driver_dataset_processed
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
--- |
Marco-Cheung/speecht5_finetuned_voxpopuli_de | Marco-Cheung | "2023-08-17T14:46:13Z" | 86 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-08-17T08:02:16Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_de
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4657
## Model description
More information needed
## Intended uses & limitations
More information needed
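A minimal text-to-speech sketch; the CMU ARCTIC x-vector used for the speaker embedding is an assumption (any 512-dimensional speaker embedding works):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Marco-Cheung/speecht5_finetuned_voxpopuli_de"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Guten Tag, wie geht es Ihnen?", return_tensors="pt")

# Borrow a 512-dim x-vector as the speaker embedding (illustrative choice).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech_de.wav", speech.numpy(), samplerate=16000)
```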
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5308 | 1.66 | 1000 | 0.4861 |
| 0.5124 | 3.33 | 2000 | 0.4732 |
| 0.5076 | 4.99 | 3000 | 0.4674 |
| 0.5051 | 6.65 | 4000 | 0.4657 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3 |
andrewatef/MyBloggerV0.19-main | andrewatef | "2024-01-22T23:02:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-22T23:01:49Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NeutralBlaster/q-FrozenLake-v1-8x8-no_slippery | NeutralBlaster | "2022-05-21T14:29:37Z" | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-21T14:29:29Z" | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-no_slippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a pip package.
model = load_from_hub(repo_id="NeutralBlaster/q-FrozenLake-v1-8x8-no_slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Ertman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-iridescent_tropical_starfish | Ertman | "2025-04-12T16:49:01Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am iridescent tropical starfish",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T20:59:20Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-iridescent_tropical_starfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am iridescent tropical starfish
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-iridescent_tropical_starfish
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ertman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-iridescent_tropical_starfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.1
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
netcat420/MFANN3bv0.24 | netcat420 | "2024-11-22T07:03:32Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"en",
"dataset:netcat420/MFANN",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T20:17:52Z" | ---
library_name: transformers
license: mit
datasets:
- netcat420/MFANN
language:
- en
---
MFANN 3b v0.24 trained on https://huggingface.co/datasets/netcat420/MFANN
system prompt:
Instruct: {instruction} Output:
based on https://huggingface.co/microsoft/phi-2/tree/main
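A minimal generation sketch using the prompt format above (device placement and generation settings are illustrative assumptions, not part of the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANN3bv0.24"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "Explain the difference between supervised and unsupervised learning."
prompt = f"Instruct: {instruction} Output:"  # prompt format described above

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```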
|
RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf | RichardErkhov | "2024-06-27T11:54:27Z" | 9 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T08:41:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MultiverseBuddy-15B-MoE - GGUF
- Model creator: https://huggingface.co/allknowingroger/
- Original model: https://huggingface.co/allknowingroger/MultiverseBuddy-15B-MoE/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MultiverseBuddy-15B-MoE.Q2_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q2_K.gguf) | Q2_K | 4.43GB |
| [MultiverseBuddy-15B-MoE.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.IQ3_XS.gguf) | IQ3_XS | 4.95GB |
| [MultiverseBuddy-15B-MoE.IQ3_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.IQ3_S.gguf) | IQ3_S | 5.22GB |
| [MultiverseBuddy-15B-MoE.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.2GB |
| [MultiverseBuddy-15B-MoE.IQ3_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.IQ3_M.gguf) | IQ3_M | 5.35GB |
| [MultiverseBuddy-15B-MoE.Q3_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q3_K.gguf) | Q3_K | 5.78GB |
| [MultiverseBuddy-15B-MoE.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q3_K_M.gguf) | Q3_K_M | 5.78GB |
| [MultiverseBuddy-15B-MoE.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.27GB |
| [MultiverseBuddy-15B-MoE.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.IQ4_XS.gguf) | IQ4_XS | 6.5GB |
| [MultiverseBuddy-15B-MoE.Q4_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q4_0.gguf) | Q4_0 | 6.1GB |
| [MultiverseBuddy-15B-MoE.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.IQ4_NL.gguf) | IQ4_NL | 6.85GB |
| [MultiverseBuddy-15B-MoE.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q4_K_S.gguf) | Q4_K_S | 6.84GB |
| [MultiverseBuddy-15B-MoE.Q4_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q4_K.gguf) | Q4_K | 7.25GB |
| [MultiverseBuddy-15B-MoE.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.25GB |
| [MultiverseBuddy-15B-MoE.Q4_1.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q4_1.gguf) | Q4_1 | 7.52GB |
| [MultiverseBuddy-15B-MoE.Q5_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q5_0.gguf) | Q5_0 | 8.26GB |
| [MultiverseBuddy-15B-MoE.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q5_K_S.gguf) | Q5_K_S | 8.26GB |
| [MultiverseBuddy-15B-MoE.Q5_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q5_K.gguf) | Q5_K | 8.51GB |
| [MultiverseBuddy-15B-MoE.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q5_K_M.gguf) | Q5_K_M | 8.51GB |
| [MultiverseBuddy-15B-MoE.Q5_1.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q5_1.gguf) | Q5_1 | 9.01GB |
| [MultiverseBuddy-15B-MoE.Q6_K.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q6_K.gguf) | Q6_K | 9.84GB |
| [MultiverseBuddy-15B-MoE.Q8_0.gguf](https://huggingface.co/RichardErkhov/allknowingroger_-_MultiverseBuddy-15B-MoE-gguf/blob/main/MultiverseBuddy-15B-MoE.Q8_0.gguf) | Q8_0 | 12.75GB |
Original model description:
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- OpenBuddy/openbuddy-mistral2-7b-v20.2-32k
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- OpenBuddy/openbuddy-mistral2-7b-v20.2-32k
---
# MultiverseBuddy-15B-MoE
MultiverseBuddy-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [OpenBuddy/openbuddy-mistral2-7b-v20.2-32k](https://huggingface.co/OpenBuddy/openbuddy-mistral2-7b-v20.2-32k)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: OpenBuddy/openbuddy-mistral2-7b-v20.2-32k
positive_prompts: ["think"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiverseBuddy-15B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
aroot/eng-fra-simcse_central_ssrb | aroot | "2023-07-06T20:12:02Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-06T19:47:43Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1471
- Bleu: 31.8498
## Model description
More information needed
## Intended uses & limitations
More information needed
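A minimal English-to-French translation sketch, assuming the fine-tuned checkpoint keeps the MBART-50 language codes of the base model:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/eng-fra-simcse_central_ssrb"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is beautiful today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # target language: French
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```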
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TransferRapid/whisper-large-v3-turbo_ro | TransferRapid | "2025-03-02T11:30:00Z" | 428 | 2 | null | [
"safetensors",
"whisper",
"speech",
"transcription",
"romanian",
"ro",
"dataset:TransferRapid/CommonVoices20_ro",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-02-06T14:27:56Z" | ---
license: cc-by-nc-4.0
language:
- ro
base_model:
- openai/whisper-large-v3-turbo
tags:
- speech
- transcription
- romanian
datasets:
- TransferRapid/CommonVoices20_ro
metrics:
- wer
- cer
---
# Whisper Large v3 Turbo (Romanian)
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
<a href="https://huggingface.co/docs/transformers/model_doc/whisper" target="_blank">Whisper</a> is an automatic speech recognition (ASR) system developed by <a href="https://huggingface.co/openai" target="_blank">OpenAI</a>.
It can transcribe and translate spoken language into text with high accuracy, supporting multiple languages, accents, and noisy environments. It is designed for general-purpose speech processing and can handle various audio inputs.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
<a href="https://huggingface.co/openai/whisper-large-v3-turbo" target="_blank">Whisper-large-v3-turbo</a> is an optimized version of OpenAI's <a href="https://huggingface.co/openai/whisper-large-v3" target="_blank">Whisper-large-v3</a> model, designed to enhance transcription speed while maintaining high accuracy.
This optimization is achieved by reducing the number of decoder layers from 32 to 4, resulting in a model that is significantly faster with only a minor decrease in transcription quality.
</h5>
<img src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*B9TP_mSq5o3F4Bjp17Q0lA.png" alt="Whisper Large v3 Turbo" width="750" style="display: block; margin: 20px auto;">
<a href="https://medium.com/axinc-ai/whisper-large-v3-turbo-high-accuracy-and-fast-speech-recognition-model-be2f6af77bdc" target="_blank">More details</a>
---
<h2>Fine-tuning</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
Under the guidance of project manager Ionuț Vișan, we have successfully fine-tuned the Whisper-large-v3-turbo model on the <a href="https://huggingface.co/datasets/TransferRapid/CommonVoices20_ro" target="_blank">Common Voices Corpus 20 (Romanian)</a> dataset,
consisting of 41,431 audio files (approximately 47 hours), each accompanied by its corresponding text transcription.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<strong>Before fine-tuning </strong> our model with the dataset, we assessed the word error rate (WER) and character error rate (CER) on the test set (test_common_voices20.csv) using the
pre-trained openai/whisper-large-v3-turbo model to establish baseline performance.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<strong>Base performance: </strong>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><em>WER</em>: 20.72%</li>
<li><em>CER</em>: 6.50%</li>
</ul>
</h5>
---
<h2>Configuration</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><strong>Trainable layers</strong> = all (encoder = 32, decoder = 4)</li>
<li><strong>Learning rate</strong> = 4e-6</li>
<li><strong>Batch size</strong> = 2 (for both dataloaders)</li>
<li><strong>Gradient accumulation steps</strong> = 8</li>
<li><strong>Optimizer</strong> = AdamW</li>
<li><strong>Weight decay</strong> = 0.2</li>
<li><strong>Epochs</strong> = 20</li>
<li><strong>Scheduler</strong> = Linear (with warmup = 0.1)</li>
</ul>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<strong>Dropout: </strong>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><strong>Encoder</strong> = </li>
<ul style="list-style-type: none; padding-left: 2px;">
<li>0.2 if idx == 20 else</li>
<li>0.1 if idx in [21, 22, 29, 30] else 0.0</li>
</ul>
<li><strong>Decoder</strong> = </li>
<ul style="list-style-type: none; padding-left: 2px;">
<li>0.2 if idx == 1 else 0.1</li>
</ul>
</ul>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
The condition for saving the model is that the test loss, Word Error Rate (WER),
and Character Error Rate (CER) must be lower than the previously recorded best values.
</h5>
---
<h2>Results</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
The fine-tuning process took 6,360 minutes (106 hours) on a single NVIDIA RTX 4500 Ada Generation GPU.
</h5>
<img src="https://huggingface.co/TransferRapid/whisper-large-v3-turbo_ro/resolve/main/error_rates_plot.png"
alt="Error Rates Plot" width="500" style="margin-left: 10px;">
<img src="https://huggingface.co/TransferRapid/whisper-large-v3-turbo_ro/resolve/main/loss_plot.png"
alt="Loss Plot" width="500" style="margin-left: 10px;">
<img src="https://huggingface.co/TransferRapid/whisper-large-v3-turbo_ro/resolve/main/learning_rate_plot.png"
alt="Learning Rate Plot" width="500" style="margin-left: 10px;">
<img src="https://huggingface.co/TransferRapid/whisper-large-v3-turbo_ro/resolve/main/epoch_metrics.png"
alt="Fine-tuning Metrics" width="350" style="margin-left: 10px;">
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
The fine-tuned model was saved at epoch 14 with new best values:
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><em>WER</em>: 4.69%</li>
<li><em>CER</em>: 1.22%</li>
</ul>
</h5>
---
<h2>How to use</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
<strong>1. </strong>If you want to transcribe a <strong>mono-channel</strong> audio file (.wav) containing a
single speaker, use the following code:
</h5>
<details>
<summary><strong>Click to expand the code</strong></summary>
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import torchaudio
import torch
model_name = "TransferRapid/whisper-large-v3-turbo_ro"
# Load processor and model
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)
# Move model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
def preprocess_audio(audio_path, processor):
"""Preprocess audio: load, resample if needed, and convert to model input format."""
waveform, sample_rate = torchaudio.load(audio_path)
# Resample to 16kHz if needed
if sample_rate != 16000:
resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
waveform = resampler(waveform)
# Process audio into model input format
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16000, return_tensors="pt")
# Move inputs to device
inputs = {key: val.to(device) for key, val in inputs.items()}
return inputs
def transcribe(audio_path, model, processor, language="romanian", task="transcribe"):
"""Generate transcription for an audio file."""
inputs = preprocess_audio(audio_path, processor)
forced_decoder_ids = processor.tokenizer.get_decoder_prompt_ids(language=language, task=task)
with torch.no_grad():
generated_ids = model.generate(inputs["input_features"], forced_decoder_ids=forced_decoder_ids)
transcription = processor.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return transcription[0]
# Define audio path
audio_file = "audio.wav"
# Run transcription
transcription = transcribe(audio_file, model, processor)
print("Transcription:", transcription)
```
</details>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 12px;">
<strong>Example of result:</strong>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 40px;">
<strong>Transcript:</strong> Astăzi am avut o zi superbă.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
<strong>2. </strong>If you want to transcribe a <strong>stereo</strong> audio file (.wav or .mp3) containing a conversation between
two speakers, use the following code:
</h5>
<details>
<summary><strong>Click to expand the code</strong></summary>
```python
import os
import torchaudio
import numpy as np
import librosa
import webrtcvad
import soundfile as sf
from pydub import AudioSegment
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import torch
# Load model from Hugging Face
model_name = "TransferRapid/whisper-large-v3-turbo_ro"
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)
# Move model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
def convert_mp3_to_wav(mp3_file_path):
"""Convert MP3 to WAV (16kHz)."""
audio = AudioSegment.from_mp3(mp3_file_path)
wav_16k_file_path = mp3_file_path.replace(".mp3", "_16k.wav")
audio.set_frame_rate(16000).export(wav_16k_file_path, format="wav")
return wav_16k_file_path
def extract_audio_channels(wav_file_path):
"""Extract left and right channels from stereo WAV."""
y, sr = librosa.load(wav_file_path, sr=None, mono=False)
if len(y.shape) == 1:
mono_file = wav_file_path.replace(".wav", "_mono.wav")
sf.write(mono_file, y, sr)
return y, None, sr, mono_file, None
left_channel, right_channel = y[0], y[1]
left_file = wav_file_path.replace(".wav", "_left.wav")
right_file = wav_file_path.replace(".wav", "_right.wav")
sf.write(left_file, left_channel, sr)
sf.write(right_file, right_channel, sr)
return left_channel, right_channel, sr, left_file, right_file
def detect_speech_intervals(channel_data, sr, vad_level=3):
"""Detect speech activity using VAD (30ms frames)."""
vad = webrtcvad.Vad(vad_level)
frame_duration = 30
frame_length = int(sr * frame_duration / 1000)
frames = librosa.util.frame(channel_data, frame_length=frame_length, hop_length=frame_length)
speech_intervals = []
for i, frame in enumerate(frames.T):
pcm_data = (frame * np.iinfo(np.int16).max).astype(np.int16).tobytes()
if vad.is_speech(pcm_data, sr):
start_time, end_time = (i * frame_duration) / 1000, ((i + 1) * frame_duration) / 1000
speech_intervals.append((start_time, end_time))
return speech_intervals
def merge_intervals(intervals, merge_threshold=1):
"""Merge speech intervals with a gap smaller than merge_threshold."""
if not intervals:
return []
merged = [list(intervals[0])]
for start, end in intervals[1:]:
if (start - merged[-1][1]) <= merge_threshold:
merged[-1][1] = end
else:
merged.append([start, end])
return merged
def save_segments(channel_data, sr, intervals, output_dir="segments", prefix="segment"):
"""Save detected speech segments."""
os.makedirs(output_dir, exist_ok=True)
segment_paths = []
for idx, (start, end) in enumerate(intervals):
start_sample = int(start * sr)
end_sample = int(end * sr)
segment = channel_data[start_sample:end_sample]
segment_path = os.path.join(output_dir, f"{prefix}_{idx+1}.wav")
sf.write(segment_path, segment, sr)
segment_paths.append((start, end, segment_path, prefix))
return segment_paths
def preprocess_audio(audio_path, processor, device):
"""Preprocess audio: load, resample if needed, and convert to model input format."""
waveform, sample_rate = torchaudio.load(audio_path)
if sample_rate != 16000:
resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
waveform = resampler(waveform)
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16000, return_tensors="pt")
inputs = {key: val.to(device) for key, val in inputs.items()}
return inputs
def transcribe(audio_path, model, processor, device, language="romanian", task="transcribe"):
"""Generate transcription for an audio file."""
inputs = preprocess_audio(audio_path, processor, device)
forced_decoder_ids = processor.tokenizer.get_decoder_prompt_ids(language=language, task=task)
with torch.no_grad():
generated_ids = model.generate(inputs["input_features"], forced_decoder_ids=forced_decoder_ids)
transcription = processor.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return transcription[0]
# Load audio file (MP3 or WAV)
audio_file = "audio.mp3"
# Convert MP3 to WAV if needed
if audio_file.endswith(".mp3"):
wav_file = convert_mp3_to_wav(audio_file)
else:
wav_file = audio_file
# Process stereo or mono file
left_channel, right_channel, sr, left_file, right_file = extract_audio_channels(wav_file)
# Process left channel (or mono)
if left_channel is not None:
left_intervals = detect_speech_intervals(left_channel, sr)
merged_left_intervals = merge_intervals(left_intervals)
left_segments = save_segments(left_channel, sr, merged_left_intervals, output_dir="left_segments", prefix="Left")
else:
left_segments = []
# Process right channel (if stereo)
if right_channel is not None:
right_intervals = detect_speech_intervals(right_channel, sr)
merged_right_intervals = merge_intervals(right_intervals)
right_segments = save_segments(right_channel, sr, merged_right_intervals, output_dir="right_segments", prefix="Right")
else:
right_segments = []
# Combine all segments and sort by start time
all_segments = left_segments + right_segments
all_segments.sort(key=lambda x: x[0])
# Transcribe each segment
for idx, (start, end, segment_path, channel) in enumerate(all_segments, start=1):
transcription = transcribe(segment_path, model, processor, device)
print(f"{idx}. {start:.2f}s → {end:.2f}s | {channel}: {transcription}")
```
</details>
<h5 style="font-family: 'Calibri'; margin-bottom: 20px;">
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 12px;">
<strong>Example of result:</strong>
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
1. <strong>0.00s → 1.12s | Right:</strong> Bună ziua, Andreea este numele meu, cu ce vă pot ajuta?
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
2. <strong>1.43s → 2.54s | Left:</strong> Bună ziua doamna Andreea, Antonia mă numesc.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
3. <strong>2.72s → 3.08s | Right:</strong> Bună Antonia.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
4. <strong>3.41s → 5.75s | Left:</strong> Voiam doar să vă urez o zi frumoasă.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
5. <strong>5.92s → 6.78s | Right:</strong> Ah, sunteți o scumpă.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
6. <strong>6.94s → 7.81s | Left:</strong> Zi superbă, la revedere.
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 5px;">
7. <strong>7.89s → 8.55s | Right:</strong> La fel, la revedere.
</h5>
---
<h2>Usage</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
The model can be used for:
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><em>Advanced voice assistants</em></li>
<li><em>Automatic transcription</em></li>
<li><em>Live subtitling systems</em></li>
<li><em>Voice recognition for call centers</em></li>
<li><em>Voice commands for smart devices</em></li>
<li><em>Voice analysis for security (biometric authentication)</em></li>
<li><em>Dictation systems for writers and professionals</em></li>
<li><em>Assistive technology for people with disabilities</em></li>
</ul>
</h5>
---
<h2>Communication</h2>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
For any questions regarding this model or to explore collaborations on ambitious AI/ML projects, please feel free to contact us at:
</h5>
<h5 style="font-family: 'Calibri'; margin-bottom: 2px;">
<ul>
<li><em>[email protected]</em></li>
<li><em><a href="https://www.linkedin.com/in/ionut-visan/" target="_blank">Ionuț Vișan's Linkedin</a></em></li>
<li><em><a href="https://www.linkedin.com/company/transfer-rapid" target="_blank">Transfer Rapid's Linkedin</a></em></li>
</ul>
</h5> |
BFS-Search/mistral_DoCRED_multi_rel | BFS-Search | "2024-11-13T17:19:26Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-13T17:14:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
twodigit/teaching55 | twodigit | "2025-01-19T23:06:22Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-19T23:01:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/TinyYi-7B-Test-GGUF | mradermacher | "2025-01-13T19:27:56Z" | 240 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"en",
"base_model:yashmarathe/TinyYi-7B-Test",
"base_model:quantized:yashmarathe/TinyYi-7B-Test",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-12T21:41:28Z" | ---
base_model: yashmarathe/TinyYi-7B-Test
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/yashmarathe/TinyYi-7B-Test
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TinyYi-7B-Test-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
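As one option, a minimal sketch with llama-cpp-python (the file name and settings below are illustrative; download the desired quant from this repo first):
```python
from llama_cpp import Llama

llm = Llama(model_path="TinyYi-7B-Test.Q4_K_M.gguf", n_ctx=4096)

output = llm("Write a haiku about mountains.", max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```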
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q2_K.gguf) | Q2_K | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q3_K_S.gguf) | Q3_K_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q3_K_L.gguf) | Q3_K_L | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.IQ4_XS.gguf) | IQ4_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q5_K_S.gguf) | Q5_K_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q5_K_M.gguf) | Q5_K_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q6_K.gguf) | Q6_K | 5.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.Q8_0.gguf) | Q8_0 | 6.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyYi-7B-Test-GGUF/resolve/main/TinyYi-7B-Test.f16.gguf) | f16 | 12.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jinmang2/dall-e-tokenizer | jinmang2 | "2021-08-30T18:20:38Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | # DALL-E-Tokenizer
Hugging Face package for the discrete VAE used for [DALL-E](https://github.com/openai/DALL-E).
# How to use
```python
# from dall_e_tok import DallEEncoder
from dall_e_tok import DALLETokenizer
tokenizer = DALLETokenizer.from_pretrained("jinmang2/dall-e-tokenizer")
```
|
Guanzheng/Qwen2.5-7B-Math-Openthink77k-SFT | Guanzheng | "2025-03-19T02:09:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T02:07:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
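Until an official snippet is provided, a minimal sketch (assuming the checkpoint keeps the standard Qwen2.5 chat template after SFT):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Guanzheng/Qwen2.5-7B-Math-Openthink77k-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Solve for x: 3x + 5 = 20."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```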
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EpistemeAI/Huacayas-16B | EpistemeAI | "2025-03-06T04:46:31Z" | 29 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T06:01:51Z" | ---
library_name: transformers
license: mit
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
This is the pre-fine-tuning base Huacayas-16B model (a pretrained checkpoint). It is intended to become a future general-purpose reasoning 16B model.
The model must be further trained (fine-tuned) before it is suitable for inference.
## Model Details
We created a custom 16B architecture and then built the model from that architecture.
This model uses the Llama 3.2 tokenizer.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** EpistemeAI
- **License:** MIT
## Uses
Intended Use Cases: Huacayas 16B is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and for agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.
### Out-of-Scope Use
Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages beyond those explicitly referenced as supported in this model card.
## Bias, Risks, and Limitations
As with all LLMs, Huacayas 16B's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts.
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
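As a stopgap, here is a minimal sketch for loading the base checkpoint with 🤗 Transformers (the repo id comes from this card; the dtype and sample prompt are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "EpistemeAI/Huacayas-16B"

# Load the bundled Llama 3.2 tokenizer and the base weights.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep a 16B model manageable
    device_map="auto",
)

# The card describes this as a pre-fine-tuning base model, so the output is not
# expected to be instruction-following yet.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```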
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jgkawell/jarvis | jgkawell | "2024-12-20T02:35:16Z" | 0 | 12 | null | [
"onnx",
"license:mit",
"region:us"
] | null | "2024-03-25T18:41:56Z" | ---
license: mit
---
Voice models that emulate the voice of JARVIS from the Marvel movies, well suited as a text-to-speech voice in Home Assistant: [docs](https://github.com/home-assistant/addons/blob/master/piper/DOCS.md#custom-voices)
If you want to use these models in Home Assistant with the Piper add-on, simply copy the `<MODEL>.onnx` and `<MODEL>.onnx.json` files into Home Assistant's `/share/piper` directory. After restarting Home Assistant you should see the voice available when configuring a new Assistant: go to the "Assistants" page in the Home Assistant settings, click "Add Assistant", and choose the voice under "Text-to-speech".
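If you prefer scripting the copy step, here is a minimal Python sketch (the source path and voice filename are placeholders for your own download):

```python
import shutil
from pathlib import Path

# Placeholders: the voice file you downloaded and Home Assistant's piper share folder.
voice = Path("downloads/jarvis.onnx")
piper_dir = Path("/share/piper")

piper_dir.mkdir(parents=True, exist_ok=True)
for src in (voice, voice.parent / (voice.name + ".json")):
    shutil.copy2(src, piper_dir / src.name)
print("Copied voice files; restart Home Assistant to pick them up.")
```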
|
Ramansh/RoBERTa-fake-news-detection | Ramansh | "2022-04-06T16:37:32Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-04-06T16:08:24Z" | ---
license: cc-by-nc-sa-4.0
---
A simple fake news detector that utilizes RoBERTa. <br/>
It was fine-tuned on [clmentbisaillon/fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset) |
lapki/Llama-2-7b-panorama-QLoRA | lapki | "2023-09-19T13:01:53Z" | 7 | 1 | peft | [
"peft",
"llama",
"llama-2",
"news",
"text-generation",
"ru",
"dataset:its5Q/panorama",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | text-generation | "2023-07-28T13:24:15Z" | ---
language:
- ru
library_name: peft
tags:
- llama
- llama-2
- news
datasets:
- its5Q/panorama
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---
# Llama 2 7B, fine-tuned on Panorama media
This repo contains the QLoRA adapter.
Prompt:
```
Write a hypothetical news story based on the given headline
### Title:
{prompt}
Text:
```
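A minimal sketch of running the adapter with `peft` (the 4-bit settings mirror the training config listed below; access to the gated base model and the generation parameters are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "lapki/Llama-2-7b-panorama-QLoRA"

# 4-bit NF4 config matching the bitsandbytes values under "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

headline = "Scientists teach a neural network to write news"  # example headline
prompt = (
    "Write a hypothetical news story based on the given headline\n"
    "### Title:\n"
    f"{headline}\n"
    "Text:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```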
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
### Additional information
Thanks [its5Q](https://huggingface.co/its5Q) for dataset and help |
markberry2010/unit2 | markberry2010 | "2024-02-06T10:51:41Z" | 0 | 0 | null | [
"CliffWalking-v0",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-06T08:50:32Z" | ---
tags:
- CliffWalking-v0
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CliffWalking-v0
type: CliffWalking-v0
metrics:
- type: mean_reward
value: -13.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **CliffWalking-v0**
This is a trained model of a **Q-Learning** agent playing **CliffWalking-v0**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook;
# it downloads and unpickles the saved model dictionary (including "env_id" and the Q-table).
model = load_from_hub(repo_id="markberry2010/unit2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
c14kevincardenas/swin-tiny-patch4-window7-224_alpha0.5_temp5.0_t3 | c14kevincardenas | "2025-03-18T18:24:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"knowledge_distillation",
"vision",
"generated_from_trainer",
"base_model:c14kevincardenas/ClimBEiT-t3",
"base_model:finetune:c14kevincardenas/ClimBEiT-t3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-02-28T22:20:10Z" | ---
library_name: transformers
license: apache-2.0
base_model: c14kevincardenas/ClimBEiT-t3
tags:
- knowledge_distillation
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224_alpha0.5_temp5.0_t3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224_alpha0.5_temp5.0_t3
This model is a fine-tuned version of [c14kevincardenas/ClimBEiT-t3](https://huggingface.co/c14kevincardenas/ClimBEiT-t3) on the c14kevincardenas/beta_caller_284_person_crop_seq_withlimb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6308
- Accuracy: 0.8178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8115 | 1.0 | 164 | 1.4105 | 0.2560 |
| 0.7178 | 2.0 | 328 | 1.1599 | 0.4685 |
| 0.5783 | 3.0 | 492 | 0.9091 | 0.6616 |
| 0.4814 | 4.0 | 656 | 0.8235 | 0.7137 |
| 0.4242 | 5.0 | 820 | 0.7623 | 0.7560 |
| 0.3773 | 6.0 | 984 | 0.7324 | 0.7777 |
| 0.3503 | 7.0 | 1148 | 0.6889 | 0.8080 |
| 0.3418 | 8.0 | 1312 | 0.6577 | 0.8091 |
| 0.3284 | 9.0 | 1476 | 0.6747 | 0.8015 |
| 0.3015 | 10.0 | 1640 | 0.6572 | 0.8091 |
| 0.2979 | 11.0 | 1804 | 0.6616 | 0.8156 |
| 0.3018 | 12.0 | 1968 | 0.6517 | 0.8341 |
| 0.286 | 13.0 | 2132 | 0.6450 | 0.8308 |
| 0.2976 | 14.0 | 2296 | 0.6335 | 0.8330 |
| 0.2938 | 15.0 | 2460 | 0.6308 | 0.8178 |
| 0.2894 | 16.0 | 2624 | 0.6356 | 0.8297 |
| 0.2899 | 17.0 | 2788 | 0.6426 | 0.8330 |
| 0.2837 | 18.0 | 2952 | 0.6312 | 0.8286 |
| 0.2874 | 19.0 | 3116 | 0.6341 | 0.8275 |
| 0.28 | 20.0 | 3280 | 0.6328 | 0.8351 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Shero448/smog | Shero448 | "2025-03-19T22:19:40Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/wai-nsfw-illustrious-v110-sdxl",
"base_model:adapter:John6666/wai-nsfw-illustrious-v110-sdxl",
"region:us"
] | text-to-image | "2025-03-19T22:19:17Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
1girl, solo, long hair, breasts, looking at viewer, smile, green eyes, (huge
breasts:1.4), black hair, cleavage, green hair, multicolored hair,
pantyhose, bowtie, rabbit ears, leotard, wrist cuffs, strapless, fake animal
ears, detached collar, playboy bunny, rabbit tail, strapless leotard, green
leotard, blush, grin, horny,
parameters:
negative_prompt: >-
lowres, (worst quality, bad quality, censored:1.2), sweaty, penetration,
bad anatomy, text, jpeg artifacts, signature, watermark, sketch,
output:
url: images/00007-3206676568.png
base_model: John6666/wai-nsfw-illustrious-v110-sdxl
instance_prompt: smog
---
# smog
<Gallery />
## Trigger words
You should use `smog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/smog/tree/main) them in the Files & versions tab.
|
krishnamk15/DeciLM7B-Merged | krishnamk15 | "2024-03-29T09:17:05Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"deci",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-29T08:39:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KantoRegion/test-lora-merged-hermione3-30 | KantoRegion | "2023-11-26T05:21:13Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | "2023-11-26T05:21:11Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
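For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` in 🤗 Transformers, with the values copied from the list above:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed in this training procedure.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```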
### Framework versions
- PEFT 0.6.2
|
herutriana44/crime_report_dataset | herutriana44 | "2025-03-28T12:51:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-28T10:09:20Z" | |
Kudod/my_fine_tuning_summary_t5_large_IA_model_hf | Kudod | "2024-02-20T08:06:37Z" | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:adapter:google-t5/t5-large",
"license:apache-2.0",
"region:us"
] | null | "2024-02-20T07:01:30Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google-t5/t5-large
model-index:
- name: my_fine_tuning_summary_t5_large_IA_model_hf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_fine_tuning_summary_t5_large_IA_model_hf
This model is a fine-tuned version of [google-t5/t5-large](https://huggingface.co/google-t5/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.1345
- Rouge2: 0.0519
- Rougel: 0.1119
- Rougelsum: 0.112
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 989 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 2.0 | 1978 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 3.0 | 2967 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
| 0.0 | 4.0 | 3956 | nan | 0.1345 | 0.0519 | 0.1119 | 0.112 | 19.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.2 |
Alex48/poca-SoccerTwos-v16 | Alex48 | "2023-03-25T00:45:01Z" | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-03-25T00:44:55Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Alex48/poca-SoccerTwos-v16
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
grapevine-AI/DeepSeek-R1-Distill-Qwen-32B-Japanese-GGUF | grapevine-AI | "2025-02-16T11:40:45Z" | 1,609 | 3 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-28T15:05:49Z" | ---
license: apache-2.0
---
# What is this?
This is CyberAgent's Japanese fine-tune of DeepSeek-R1-Distill-Qwen-32B, [DeepSeek-R1-Distill-Qwen-32B-Japanese](https://huggingface.co/cyberagent/DeepSeek-R1-Distill-Qwen-32B-Japanese), converted to the GGUF format.
# imatrix dataset
To emphasize Japanese capability, the [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset, which contains a large amount of Japanese text, was used.<br>
In addition, since the CUDA build of llama.cpp now supports bfloat16, the imatrix was computed from the BF16 model, i.e. at the model's original numerical precision.
# Note
**Please use this together with llama.cpp-b4514 or later.**
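For a quick test, a minimal sketch using the `llama-cpp-python` bindings (the GGUF filename is a placeholder, and the installed bindings must be built against llama.cpp-b4514 or newer):

```python
from llama_cpp import Llama

# Placeholder filename: point this at one of the quantized files from this repo.
llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-32B-Japanese-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU if available
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本で一番高い山は何ですか?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```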
# Environment
Quantization was carried out using the Windows build of llama.cpp-b4514 and the convert-hf-to-gguf.py released together with llama.cpp-b4524.
# License
Apache 2.0
# Developer
Alibaba Cloud & DeepSeek (深度求索) & CyberAgent |
Pedrampd/NLP-HW5-PosTaggerModel | Pedrampd | "2023-07-21T21:14:29Z" | 121 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-07-21T21:00:29Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLP-HW5-PosTaggerModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-HW5-PosTaggerModel
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1278
- Accuracy: 0.9659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7026 | 1.0 | 878 | 0.1925 | 0.9493 |
| 0.1976 | 2.0 | 1756 | 0.1446 | 0.9610 |
| 0.157 | 3.0 | 2634 | 0.1278 | 0.9659 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Imanbehravan/bert-large-question-answering-finetuned-legal | Imanbehravan | "2024-07-14T13:57:32Z" | 26 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-07-13T20:36:01Z" | ---
tags:
- generated_from_trainer
model-index:
- name: bert-large-question-answering-finetuned-legal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-question-answering-finetuned-legal
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
leyla95/l2-ksu-whisper | leyla95 | "2025-04-09T08:09:52Z" | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-07T18:07:08Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: l2-ksu-whisper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# l2-ksu-whisper
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0147
- Wer: 1.3348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 3
- mixed_precision_training: Native AMP
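For reference, a minimal sketch of `Seq2SeqTrainingArguments` mirroring the list above (the output directory and any options not listed are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="l2-ksu-whisper",       # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=2,     # effective train batch size of 32
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=3,
    fp16=True,                         # native AMP mixed precision
)
```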
### Training results
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
huizhang0110/hui-embedding | huizhang0110 | "2024-11-26T05:15:59Z" | 0 | 0 | null | [
"mteb",
"model-index",
"region:us"
] | null | "2024-01-18T10:24:23Z" | ---
model-index:
- name: no_model_name_available
results:
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 66.2368177379181
- type: cosine_spearman
value: 68.35446129213678
- type: euclidean_pearson
value: 68.35832044207704
- type: euclidean_spearman
value: 68.35446129213678
- type: main_score
value: 68.35446129213678
- type: manhattan_pearson
value: 68.70754373818515
- type: manhattan_spearman
value: 68.2292889016414
- type: pearson
value: 66.2368177379181
- type: spearman
value: 68.35446129213678
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 85.12461231748527
- type: cosine_spearman
value: 83.78377223012504
- type: euclidean_pearson
value: 84.84032421122767
- type: euclidean_spearman
value: 83.78376987896931
- type: main_score
value: 83.78377223012504
- type: manhattan_pearson
value: 84.97174244411761
- type: manhattan_spearman
value: 84.13202634643542
- type: pearson
value: 85.12461231748527
- type: spearman
value: 83.78377223012504
task:
type: STS
- dataset:
config: default
name: MTEB Touche2020 (default)
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: main_score
value: 25.883
- type: map_at_1
value: 2.153
- type: map_at_10
value: 9.871
- type: map_at_100
value: 15.559000000000001
- type: map_at_1000
value: 17.183
- type: map_at_20
value: 12.552
- type: map_at_3
value: 5.493
- type: map_at_5
value: 7.85
- type: mrr_at_1
value: 30.612244897959183
- type: mrr_at_10
value: 48.89131843213475
- type: mrr_at_100
value: 49.6963561262702
- type: mrr_at_1000
value: 49.7010693279481
- type: mrr_at_20
value: 49.531452107982716
- type: mrr_at_3
value: 44.21768707482994
- type: mrr_at_5
value: 47.68707482993197
- type: nauc_map_at_1000_diff1
value: 25.31034571291797
- type: nauc_map_at_1000_max
value: 34.51576312061718
- type: nauc_map_at_1000_std
value: -4.906594382965329
- type: nauc_map_at_100_diff1
value: 25.837142212716476
- type: nauc_map_at_100_max
value: 32.59407997636304
- type: nauc_map_at_100_std
value: -10.217037670639481
- type: nauc_map_at_10_diff1
value: 33.21608048564407
- type: nauc_map_at_10_max
value: 37.468380135605706
- type: nauc_map_at_10_std
value: -20.46767738235632
- type: nauc_map_at_1_diff1
value: 32.281523854579106
- type: nauc_map_at_1_max
value: 22.176737258675068
- type: nauc_map_at_1_std
value: -25.07807730673564
- type: nauc_map_at_20_diff1
value: 30.866307166529584
- type: nauc_map_at_20_max
value: 32.272418879076724
- type: nauc_map_at_20_std
value: -20.40305363345012
- type: nauc_map_at_3_diff1
value: 30.88885591305534
- type: nauc_map_at_3_max
value: 33.431908247176786
- type: nauc_map_at_3_std
value: -19.503954175936993
- type: nauc_map_at_5_diff1
value: 34.08468180972433
- type: nauc_map_at_5_max
value: 40.256459257111935
- type: nauc_map_at_5_std
value: -18.56884658312989
- type: nauc_mrr_at_1000_diff1
value: 30.71882754790342
- type: nauc_mrr_at_1000_max
value: 14.576101913381093
- type: nauc_mrr_at_1000_std
value: -10.726757628242753
- type: nauc_mrr_at_100_diff1
value: 30.72979380373732
- type: nauc_mrr_at_100_max
value: 14.58962334045265
- type: nauc_mrr_at_100_std
value: -10.709231106839757
- type: nauc_mrr_at_10_diff1
value: 30.538894215258246
- type: nauc_mrr_at_10_max
value: 13.803938970422532
- type: nauc_mrr_at_10_std
value: -9.702168266086352
- type: nauc_mrr_at_1_diff1
value: 30.684478472773836
- type: nauc_mrr_at_1_max
value: 17.71761545127753
- type: nauc_mrr_at_1_std
value: -22.77705607353801
- type: nauc_mrr_at_20_diff1
value: 30.82506745472977
- type: nauc_mrr_at_20_max
value: 14.664189943251788
- type: nauc_mrr_at_20_std
value: -10.748922964408402
- type: nauc_mrr_at_3_diff1
value: 28.971974395355954
- type: nauc_mrr_at_3_max
value: 14.14445297613165
- type: nauc_mrr_at_3_std
value: -14.23741446560331
- type: nauc_mrr_at_5_diff1
value: 31.746911225636275
- type: nauc_mrr_at_5_max
value: 14.268610321705955
- type: nauc_mrr_at_5_std
value: -9.700708429060887
- type: nauc_ndcg_at_1000_diff1
value: 21.089489813171816
- type: nauc_ndcg_at_1000_max
value: 28.175842354263764
- type: nauc_ndcg_at_1000_std
value: 21.49424339507402
- type: nauc_ndcg_at_100_diff1
value: 19.8292750148825
- type: nauc_ndcg_at_100_max
value: 17.123814348188652
- type: nauc_ndcg_at_100_std
value: 6.051404399623092
- type: nauc_ndcg_at_10_diff1
value: 28.194702409547332
- type: nauc_ndcg_at_10_max
value: 18.97062064198259
- type: nauc_ndcg_at_10_std
value: -12.862439768903611
- type: nauc_ndcg_at_1_diff1
value: 30.684478472773836
- type: nauc_ndcg_at_1_max
value: 17.71761545127753
- type: nauc_ndcg_at_1_std
value: -22.77705607353801
- type: nauc_ndcg_at_20_diff1
value: 24.833493660655364
- type: nauc_ndcg_at_20_max
value: 16.53068197823132
- type: nauc_ndcg_at_20_std
value: -13.971353024276375
- type: nauc_ndcg_at_3_diff1
value: 29.840792656092052
- type: nauc_ndcg_at_3_max
value: 18.823207152450045
- type: nauc_ndcg_at_3_std
value: -12.753978007436833
- type: nauc_ndcg_at_5_diff1
value: 29.669577759746584
- type: nauc_ndcg_at_5_max
value: 24.204580513440916
- type: nauc_ndcg_at_5_std
value: -8.081655001819906
- type: nauc_precision_at_1000_diff1
value: -18.464873284114397
- type: nauc_precision_at_1000_max
value: 21.495318097003405
- type: nauc_precision_at_1000_std
value: 57.177192580535554
- type: nauc_precision_at_100_diff1
value: -4.067845048543001
- type: nauc_precision_at_100_max
value: 13.157305810279249
- type: nauc_precision_at_100_std
value: 51.20993669331124
- type: nauc_precision_at_10_diff1
value: 27.299848819776397
- type: nauc_precision_at_10_max
value: 15.622698996242287
- type: nauc_precision_at_10_std
value: -5.590347344162569
- type: nauc_precision_at_1_diff1
value: 30.684478472773836
- type: nauc_precision_at_1_max
value: 17.71761545127753
- type: nauc_precision_at_1_std
value: -22.77705607353801
- type: nauc_precision_at_20_diff1
value: 20.89429650870699
- type: nauc_precision_at_20_max
value: 15.544972379682054
- type: nauc_precision_at_20_std
value: 1.4293466620551607
- type: nauc_precision_at_3_diff1
value: 27.536001423592403
- type: nauc_precision_at_3_max
value: 19.633139870619367
- type: nauc_precision_at_3_std
value: -12.615890884253755
- type: nauc_precision_at_5_diff1
value: 27.120672981961334
- type: nauc_precision_at_5_max
value: 27.279847435518494
- type: nauc_precision_at_5_std
value: -4.87902522849883
- type: nauc_recall_at_1000_diff1
value: -2.8271060100732144
- type: nauc_recall_at_1000_max
value: 20.480146626345764
- type: nauc_recall_at_1000_std
value: 66.47919599815614
- type: nauc_recall_at_100_diff1
value: 12.101023414577305
- type: nauc_recall_at_100_max
value: 10.468322459855992
- type: nauc_recall_at_100_std
value: 18.442020075752115
- type: nauc_recall_at_10_diff1
value: 26.819559448061753
- type: nauc_recall_at_10_max
value: 16.76558631205096
- type: nauc_recall_at_10_std
value: -19.808438532692723
- type: nauc_recall_at_1_diff1
value: 32.281523854579106
- type: nauc_recall_at_1_max
value: 22.176737258675068
- type: nauc_recall_at_1_std
value: -25.07807730673564
- type: nauc_recall_at_20_diff1
value: 23.923306941170072
- type: nauc_recall_at_20_max
value: 11.835367892374284
- type: nauc_recall_at_20_std
value: -15.756707719745929
- type: nauc_recall_at_3_diff1
value: 24.836560375866597
- type: nauc_recall_at_3_max
value: 23.378529137896617
- type: nauc_recall_at_3_std
value: -20.181080283438245
- type: nauc_recall_at_5_diff1
value: 28.37846887432799
- type: nauc_recall_at_5_max
value: 29.201940847759573
- type: nauc_recall_at_5_std
value: -16.353497357324052
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.883
- type: ndcg_at_100
value: 36.213
- type: ndcg_at_1000
value: 47.952
- type: ndcg_at_20
value: 27.309
- type: ndcg_at_3
value: 30.532999999999998
- type: ndcg_at_5
value: 29.494999999999997
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 21.837
- type: precision_at_100
value: 7.286
- type: precision_at_1000
value: 1.488
- type: precision_at_20
value: 18.061
- type: precision_at_3
value: 31.293
- type: precision_at_5
value: 29.387999999999998
- type: recall_at_1
value: 2.153
- type: recall_at_10
value: 15.836
- type: recall_at_100
value: 44.199
- type: recall_at_1000
value: 79.809
- type: recall_at_20
value: 24.375
- type: recall_at_3
value: 6.729
- type: recall_at_5
value: 10.829
task:
type: Retrieval
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 93.46268656716417
- type: ap
value: 73.16905656173336
- type: ap_weighted
value: 73.16905656173336
- type: f1
value: 89.9835927572066
- type: f1_weighted
value: 93.578175628546
- type: main_score
value: 93.46268656716417
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification (default)
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.64014999999999
- type: ap
value: 94.75468802312224
- type: ap_weighted
value: 94.75468802312224
- type: f1
value: 96.63929533118718
- type: f1_weighted
value: 96.63929533118718
- type: main_score
value: 96.64014999999999
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 63.970000000000006
- type: f1
value: 62.682765229278615
- type: f1_weighted
value: 62.682765229278615
- type: main_score
value: 63.970000000000006
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna (default)
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 67.323
- type: map_at_1
value: 45.448
- type: map_at_10
value: 60.18599999999999
- type: map_at_100
value: 60.687999999999995
- type: map_at_1000
value: 60.690999999999995
- type: map_at_20
value: 60.563
- type: map_at_3
value: 57.053
- type: map_at_5
value: 58.867000000000004
- type: mrr_at_1
value: 45.59032716927454
- type: mrr_at_10
value: 60.22429384271503
- type: mrr_at_100
value: 60.72592259489321
- type: mrr_at_1000
value: 60.72916244163348
- type: mrr_at_20
value: 60.60091997479985
- type: mrr_at_3
value: 57.11237553342832
- type: mrr_at_5
value: 58.90469416785227
- type: nauc_map_at_1000_diff1
value: 33.068441925532255
- type: nauc_map_at_1000_max
value: 10.276888386284378
- type: nauc_map_at_1000_std
value: -16.78833416335307
- type: nauc_map_at_100_diff1
value: 33.07060210913634
- type: nauc_map_at_100_max
value: 10.282642963249359
- type: nauc_map_at_100_std
value: -16.781593405086635
- type: nauc_map_at_10_diff1
value: 32.83665966609534
- type: nauc_map_at_10_max
value: 10.577416388110274
- type: nauc_map_at_10_std
value: -16.505603731786895
- type: nauc_map_at_1_diff1
value: 38.109973823503346
- type: nauc_map_at_1_max
value: 7.608684545790856
- type: nauc_map_at_1_std
value: -17.893865428628576
- type: nauc_map_at_20_diff1
value: 33.044589968115
- type: nauc_map_at_20_max
value: 10.375042647373576
- type: nauc_map_at_20_std
value: -16.6822453639938
- type: nauc_map_at_3_diff1
value: 32.277891718391814
- type: nauc_map_at_3_max
value: 9.850443641282443
- type: nauc_map_at_3_std
value: -17.94797851381197
- type: nauc_map_at_5_diff1
value: 32.16092173570638
- type: nauc_map_at_5_max
value: 10.209270409598851
- type: nauc_map_at_5_std
value: -17.465881200007004
- type: nauc_mrr_at_1000_diff1
value: 32.827813536418006
- type: nauc_mrr_at_1000_max
value: 10.087021629677352
- type: nauc_mrr_at_1000_std
value: -16.967746341911923
- type: nauc_mrr_at_100_diff1
value: 32.83000077148736
- type: nauc_mrr_at_100_max
value: 10.092796216302164
- type: nauc_mrr_at_100_std
value: -16.960987105341093
- type: nauc_mrr_at_10_diff1
value: 32.60032888130517
- type: nauc_mrr_at_10_max
value: 10.390784050073744
- type: nauc_mrr_at_10_std
value: -16.681959182829477
- type: nauc_mrr_at_1_diff1
value: 37.728857246219
- type: nauc_mrr_at_1_max
value: 7.4467908121287465
- type: nauc_mrr_at_1_std
value: -18.30248538693518
- type: nauc_mrr_at_20_diff1
value: 32.80506350021981
- type: nauc_mrr_at_20_max
value: 10.186006965165907
- type: nauc_mrr_at_20_std
value: -16.86087660734542
- type: nauc_mrr_at_3_diff1
value: 32.19594731244019
- type: nauc_mrr_at_3_max
value: 9.803200657757092
- type: nauc_mrr_at_3_std
value: -18.11910256146044
- type: nauc_mrr_at_5_diff1
value: 31.933881085281225
- type: nauc_mrr_at_5_max
value: 10.029923020334538
- type: nauc_mrr_at_5_std
value: -17.635162099540345
- type: nauc_ndcg_at_1000_diff1
value: 32.518889050927
- type: nauc_ndcg_at_1000_max
value: 10.875658070812662
- type: nauc_ndcg_at_1000_std
value: -16.286324059189997
- type: nauc_ndcg_at_100_diff1
value: 32.57162076556983
- type: nauc_ndcg_at_100_max
value: 11.011136236680544
- type: nauc_ndcg_at_100_std
value: -16.1394614926114
- type: nauc_ndcg_at_10_diff1
value: 31.531506473288175
- type: nauc_ndcg_at_10_max
value: 12.417307560165447
- type: nauc_ndcg_at_10_std
value: -14.76971088523127
- type: nauc_ndcg_at_1_diff1
value: 38.109973823503346
- type: nauc_ndcg_at_1_max
value: 7.608684545790856
- type: nauc_ndcg_at_1_std
value: -17.893865428628576
- type: nauc_ndcg_at_20_diff1
value: 32.34260978744937
- type: nauc_ndcg_at_20_max
value: 11.698122482769248
- type: nauc_ndcg_at_20_std
value: -15.360551678773856
- type: nauc_ndcg_at_3_diff1
value: 30.412571299678465
- type: nauc_ndcg_at_3_max
value: 10.694959789521832
- type: nauc_ndcg_at_3_std
value: -18.138119030741954
- type: nauc_ndcg_at_5_diff1
value: 29.96746423000831
- type: nauc_ndcg_at_5_max
value: 11.382928004181887
- type: nauc_ndcg_at_5_std
value: -17.31473188362318
- type: nauc_precision_at_1000_diff1
value: 17.914806895369583
- type: nauc_precision_at_1000_max
value: 24.542936056736938
- type: nauc_precision_at_1000_std
value: 9.032925153517976
- type: nauc_precision_at_100_diff1
value: 36.40038451420755
- type: nauc_precision_at_100_max
value: 44.12404870553998
- type: nauc_precision_at_100_std
value: 23.899082906071847
- type: nauc_precision_at_10_diff1
value: 22.531117645662295
- type: nauc_precision_at_10_max
value: 28.061598506640568
- type: nauc_precision_at_10_std
value: 1.4390989358928021
- type: nauc_precision_at_1_diff1
value: 38.109973823503346
- type: nauc_precision_at_1_max
value: 7.608684545790856
- type: nauc_precision_at_1_std
value: -17.893865428628576
- type: nauc_precision_at_20_diff1
value: 27.52248228295167
- type: nauc_precision_at_20_max
value: 31.544335924785592
- type: nauc_precision_at_20_std
value: 7.8837210646197144
- type: nauc_precision_at_3_diff1
value: 23.746154368105525
- type: nauc_precision_at_3_max
value: 13.770751722927347
- type: nauc_precision_at_3_std
value: -18.895725316115847
- type: nauc_precision_at_5_diff1
value: 20.01291443486786
- type: nauc_precision_at_5_max
value: 16.77718143645159
- type: nauc_precision_at_5_std
value: -16.639028720975606
- type: nauc_recall_at_1000_diff1
value: 17.914806895363768
- type: nauc_recall_at_1000_max
value: 24.542936056730795
- type: nauc_recall_at_1000_std
value: 9.032925153519013
- type: nauc_recall_at_100_diff1
value: 36.400384514208625
- type: nauc_recall_at_100_max
value: 44.1240487055395
- type: nauc_recall_at_100_std
value: 23.899082906069786
- type: nauc_recall_at_10_diff1
value: 22.531117645662015
- type: nauc_recall_at_10_max
value: 28.06159850664045
- type: nauc_recall_at_10_std
value: 1.4390989358927389
- type: nauc_recall_at_1_diff1
value: 38.109973823503346
- type: nauc_recall_at_1_max
value: 7.608684545790856
- type: nauc_recall_at_1_std
value: -17.893865428628576
- type: nauc_recall_at_20_diff1
value: 27.52248228295106
- type: nauc_recall_at_20_max
value: 31.54433592478498
- type: nauc_recall_at_20_std
value: 7.883721064619468
- type: nauc_recall_at_3_diff1
value: 23.746154368105564
- type: nauc_recall_at_3_max
value: 13.770751722927372
- type: nauc_recall_at_3_std
value: -18.89572531611582
- type: nauc_recall_at_5_diff1
value: 20.01291443486774
- type: nauc_recall_at_5_max
value: 16.77718143645164
- type: nauc_recall_at_5_std
value: -16.639028720975606
- type: ndcg_at_1
value: 45.448
- type: ndcg_at_10
value: 67.323
- type: ndcg_at_100
value: 69.484
- type: ndcg_at_1000
value: 69.544
- type: ndcg_at_20
value: 68.644
- type: ndcg_at_3
value: 60.865
- type: ndcg_at_5
value: 64.125
- type: precision_at_1
value: 45.448
- type: precision_at_10
value: 8.969000000000001
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.74
- type: precision_at_3
value: 23.968999999999998
- type: precision_at_5
value: 15.959999999999999
- type: recall_at_1
value: 45.448
- type: recall_at_10
value: 89.687
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 94.808
- type: recall_at_3
value: 71.906
- type: recall_at_5
value: 79.801
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 54.266038386390726
- type: v_measure
value: 54.266038386390726
- type: v_measure_std
value: 14.60711085325104
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S (default)
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 49.82450675178832
- type: v_measure
value: 49.82450675178832
- type: v_measure_std
value: 14.692705635234821
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions (default)
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: main_score
value: 63.50490353933854
- type: map
value: 63.50490353933854
- type: mrr
value: 76.79395858066218
- type: nAUC_map_diff1
value: 17.162853733308793
- type: nAUC_map_max
value: 24.966054539639252
- type: nAUC_map_std
value: 17.887481717389274
- type: nAUC_mrr_diff1
value: 19.033169471151794
- type: nAUC_mrr_max
value: 37.52423297117104
- type: nAUC_mrr_std
value: 13.636799292425081
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 82.07979992743572
- type: cosine_spearman
value: 80.97112209037952
- type: euclidean_pearson
value: 80.726157419205
- type: euclidean_spearman
value: 80.97112209037952
- type: main_score
value: 80.97112209037952
- type: manhattan_pearson
value: 80.75553649447407
- type: manhattan_spearman
value: 81.35366092726835
- type: pearson
value: 82.07979992743572
- type: spearman
value: 80.97112209037952
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification (default)
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 84.29870129870129
- type: f1
value: 83.68342680640103
- type: f1_weighted
value: 83.68342680640104
- type: main_score
value: 84.29870129870129
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P (default)
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 47.21939028186225
- type: v_measure
value: 47.21939028186225
- type: v_measure_std
value: 0.6399652619210745
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S (default)
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 43.904503083954765
- type: v_measure
value: 43.904503083954765
- type: v_measure_std
value: 0.6741506180366014
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval (default)
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: main_score
value: 50.441
- type: map_at_1
value: 30.581000000000003
- type: map_at_10
value: 43.536
- type: map_at_100
value: 45.086999999999996
- type: map_at_1000
value: 45.189
- type: map_at_20
value: 44.37
- type: map_at_3
value: 39.769
- type: map_at_5
value: 42.028999999999996
- type: mrr_at_1
value: 36.9098712446352
- type: mrr_at_10
value: 48.776483411676516
- type: mrr_at_100
value: 49.51782521965795
- type: mrr_at_1000
value: 49.5505304504549
- type: mrr_at_20
value: 49.20302191439193
- type: mrr_at_3
value: 45.85121602288984
- type: mrr_at_5
value: 47.696709585121575
- type: nauc_map_at_1000_diff1
value: 50.77442312287549
- type: nauc_map_at_1000_max
value: 34.33647599905781
- type: nauc_map_at_1000_std
value: -13.748516605067781
- type: nauc_map_at_100_diff1
value: 50.75894617435753
- type: nauc_map_at_100_max
value: 34.35502812001472
- type: nauc_map_at_100_std
value: -13.736841648468175
- type: nauc_map_at_10_diff1
value: 50.87929446622231
- type: nauc_map_at_10_max
value: 34.27157508239978
- type: nauc_map_at_10_std
value: -14.526407596674309
- type: nauc_map_at_1_diff1
value: 57.0909475560327
- type: nauc_map_at_1_max
value: 32.2288149431883
- type: nauc_map_at_1_std
value: -13.370874900310689
- type: nauc_map_at_20_diff1
value: 50.63798885082635
- type: nauc_map_at_20_max
value: 34.26498561214445
- type: nauc_map_at_20_std
value: -13.93188362561783
- type: nauc_map_at_3_diff1
value: 52.5737761085553
- type: nauc_map_at_3_max
value: 33.76013333419806
- type: nauc_map_at_3_std
value: -13.849008988263117
- type: nauc_map_at_5_diff1
value: 51.19968604216378
- type: nauc_map_at_5_max
value: 33.54095507132505
- type: nauc_map_at_5_std
value: -14.620211074645637
- type: nauc_mrr_at_1000_diff1
value: 48.38609356318301
- type: nauc_mrr_at_1000_max
value: 33.98679377266471
- type: nauc_mrr_at_1000_std
value: -13.759418374094038
- type: nauc_mrr_at_100_diff1
value: 48.37236116991555
- type: nauc_mrr_at_100_max
value: 33.978575821483865
- type: nauc_mrr_at_100_std
value: -13.748715391580502
- type: nauc_mrr_at_10_diff1
value: 48.20980705221954
- type: nauc_mrr_at_10_max
value: 33.97030796624786
- type: nauc_mrr_at_10_std
value: -14.023184119296047
- type: nauc_mrr_at_1_diff1
value: 52.835554088618565
- type: nauc_mrr_at_1_max
value: 34.736747824514026
- type: nauc_mrr_at_1_std
value: -14.782309133752246
- type: nauc_mrr_at_20_diff1
value: 48.185661393251586
- type: nauc_mrr_at_20_max
value: 33.92181383095129
- type: nauc_mrr_at_20_std
value: -13.749958473599438
- type: nauc_mrr_at_3_diff1
value: 49.06255086663413
- type: nauc_mrr_at_3_max
value: 34.24245966485257
- type: nauc_mrr_at_3_std
value: -14.121042079344855
- type: nauc_mrr_at_5_diff1
value: 48.02661914739764
- type: nauc_mrr_at_5_max
value: 33.54319852163983
- type: nauc_mrr_at_5_std
value: -14.40749724842102
- type: nauc_ndcg_at_1000_diff1
value: 48.93136634666757
- type: nauc_ndcg_at_1000_max
value: 34.3178528230429
- type: nauc_ndcg_at_1000_std
value: -11.95564837442876
- type: nauc_ndcg_at_100_diff1
value: 48.37542091427922
- type: nauc_ndcg_at_100_max
value: 34.41374261950128
- type: nauc_ndcg_at_100_std
value: -11.526876720456004
- type: nauc_ndcg_at_10_diff1
value: 48.27862794425633
- type: nauc_ndcg_at_10_max
value: 34.06415523516767
- type: nauc_ndcg_at_10_std
value: -14.602823441995778
- type: nauc_ndcg_at_1_diff1
value: 52.835554088618565
- type: nauc_ndcg_at_1_max
value: 34.736747824514026
- type: nauc_ndcg_at_1_std
value: -14.782309133752246
- type: nauc_ndcg_at_20_diff1
value: 47.6519433010848
- type: nauc_ndcg_at_20_max
value: 33.800628012770034
- type: nauc_ndcg_at_20_std
value: -12.852071902619322
- type: nauc_ndcg_at_3_diff1
value: 50.40632210084943
- type: nauc_ndcg_at_3_max
value: 34.15616519598939
- type: nauc_ndcg_at_3_std
value: -14.396914052394957
- type: nauc_ndcg_at_5_diff1
value: 48.2287768686924
- type: nauc_ndcg_at_5_max
value: 32.68281782116356
- type: nauc_ndcg_at_5_std
value: -15.15658424373146
- type: nauc_precision_at_1000_diff1
value: -16.822042402923493
- type: nauc_precision_at_1000_max
value: -13.459387124925234
- type: nauc_precision_at_1000_std
value: -4.684574162765856
- type: nauc_precision_at_100_diff1
value: -12.950405503358223
- type: nauc_precision_at_100_max
value: -3.6973387744248694
- type: nauc_precision_at_100_std
value: 1.0686120361051838
- type: nauc_precision_at_10_diff1
value: 5.680154771052575
- type: nauc_precision_at_10_max
value: 17.15052960292624
- type: nauc_precision_at_10_std
value: -8.839454848202234
- type: nauc_precision_at_1_diff1
value: 52.835554088618565
- type: nauc_precision_at_1_max
value: 34.736747824514026
- type: nauc_precision_at_1_std
value: -14.782309133752246
- type: nauc_precision_at_20_diff1
value: -4.147057156482801
- type: nauc_precision_at_20_max
value: 8.943409955940282
- type: nauc_precision_at_20_std
value: -1.1556219667822423
- type: nauc_precision_at_3_diff1
value: 28.31897232747757
- type: nauc_precision_at_3_max
value: 28.82890100390525
- type: nauc_precision_at_3_std
value: -14.111275428339304
- type: nauc_precision_at_5_diff1
value: 15.554085244274193
- type: nauc_precision_at_5_max
value: 20.934501596265694
- type: nauc_precision_at_5_std
value: -12.783947594197997
- type: nauc_recall_at_1000_diff1
value: 41.806713621678924
- type: nauc_recall_at_1000_max
value: 45.83932148789026
- type: nauc_recall_at_1000_std
value: 43.35832012725718
- type: nauc_recall_at_100_diff1
value: 33.259689472107425
- type: nauc_recall_at_100_max
value: 34.680606932499764
- type: nauc_recall_at_100_std
value: 13.022981792106265
- type: nauc_recall_at_10_diff1
value: 39.674972215899004
- type: nauc_recall_at_10_max
value: 31.571411194709793
- type: nauc_recall_at_10_std
value: -13.917597013140865
- type: nauc_recall_at_1_diff1
value: 57.0909475560327
- type: nauc_recall_at_1_max
value: 32.2288149431883
- type: nauc_recall_at_1_std
value: -13.370874900310689
- type: nauc_recall_at_20_diff1
value: 34.82477941545416
- type: nauc_recall_at_20_max
value: 29.419097652367583
- type: nauc_recall_at_20_std
value: -6.753466274035959
- type: nauc_recall_at_3_diff1
value: 47.05993622483161
- type: nauc_recall_at_3_max
value: 31.788946521479673
- type: nauc_recall_at_3_std
value: -12.589599804850593
- type: nauc_recall_at_5_diff1
value: 40.8980124793814
- type: nauc_recall_at_5_max
value: 28.169478524380477
- type: nauc_recall_at_5_std
value: -15.058399770454422
- type: ndcg_at_1
value: 36.91
- type: ndcg_at_10
value: 50.441
- type: ndcg_at_100
value: 55.986999999999995
- type: ndcg_at_1000
value: 57.50999999999999
- type: ndcg_at_20
value: 52.588
- type: ndcg_at_3
value: 45.039
- type: ndcg_at_5
value: 47.908
- type: precision_at_1
value: 36.91
- type: precision_at_10
value: 9.771
- type: precision_at_100
value: 1.5779999999999998
- type: precision_at_1000
value: 0.199
- type: precision_at_20
value: 5.808
- type: precision_at_3
value: 22.222
- type: precision_at_5
value: 16.28
- type: recall_at_1
value: 30.581000000000003
- type: recall_at_10
value: 64.43799999999999
- type: recall_at_100
value: 87.439
- type: recall_at_1000
value: 96.682
- type: recall_at_20
value: 72.021
- type: recall_at_3
value: 49.119
- type: recall_at_5
value: 56.650999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval (default)
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: main_score
value: 48.052
- type: map_at_1
value: 31.691000000000003
- type: map_at_10
value: 42.25
- type: map_at_100
value: 43.466
- type: map_at_1000
value: 43.592
- type: map_at_20
value: 42.925000000000004
- type: map_at_3
value: 39.196999999999996
- type: map_at_5
value: 40.837
- type: mrr_at_1
value: 40.0
- type: mrr_at_10
value: 48.40979172985544
- type: mrr_at_100
value: 49.01329345568664
- type: mrr_at_1000
value: 49.05317333733556
- type: mrr_at_20
value: 48.757963347938926
- type: mrr_at_3
value: 46.18895966029725
- type: mrr_at_5
value: 47.45647558386417
- type: nauc_map_at_1000_diff1
value: 52.63721197705168
- type: nauc_map_at_1000_max
value: 34.927748424948255
- type: nauc_map_at_1000_std
value: 1.0444719278570702
- type: nauc_map_at_100_diff1
value: 52.66002218018987
- type: nauc_map_at_100_max
value: 34.89878215864321
- type: nauc_map_at_100_std
value: 0.9008516460644733
- type: nauc_map_at_10_diff1
value: 52.889235851315775
- type: nauc_map_at_10_max
value: 34.6922736480049
- type: nauc_map_at_10_std
value: -0.6506284048085285
- type: nauc_map_at_1_diff1
value: 56.732243764713175
- type: nauc_map_at_1_max
value: 30.49325212155099
- type: nauc_map_at_1_std
value: -5.04800794470186
- type: nauc_map_at_20_diff1
value: 52.8862850560935
- type: nauc_map_at_20_max
value: 34.829038965108325
- type: nauc_map_at_20_std
value: 0.02178642495237562
- type: nauc_map_at_3_diff1
value: 53.608193764117
- type: nauc_map_at_3_max
value: 33.53981267373349
- type: nauc_map_at_3_std
value: -3.040418170003493
- type: nauc_map_at_5_diff1
value: 53.39851810143899
- type: nauc_map_at_5_max
value: 34.5516659463275
- type: nauc_map_at_5_std
value: -1.4969739346974889
- type: nauc_mrr_at_1000_diff1
value: 51.8960971254646
- type: nauc_mrr_at_1000_max
value: 37.39091504745532
- type: nauc_mrr_at_1000_std
value: 5.037970602087237
- type: nauc_mrr_at_100_diff1
value: 51.881385486300225
- type: nauc_mrr_at_100_max
value: 37.38614133569158
- type: nauc_mrr_at_100_std
value: 5.034384753845119
- type: nauc_mrr_at_10_diff1
value: 51.77335216991783
- type: nauc_mrr_at_10_max
value: 37.61929128133669
- type: nauc_mrr_at_10_std
value: 4.912421162621211
- type: nauc_mrr_at_1_diff1
value: 55.97789723641661
- type: nauc_mrr_at_1_max
value: 38.07741378971052
- type: nauc_mrr_at_1_std
value: 3.1114912067800407
- type: nauc_mrr_at_20_diff1
value: 51.924932204924964
- type: nauc_mrr_at_20_max
value: 37.43188155675892
- type: nauc_mrr_at_20_std
value: 4.912649497021889
- type: nauc_mrr_at_3_diff1
value: 52.62682614740191
- type: nauc_mrr_at_3_max
value: 37.79696523235296
- type: nauc_mrr_at_3_std
value: 4.297604310897065
- type: nauc_mrr_at_5_diff1
value: 51.93341098564305
- type: nauc_mrr_at_5_max
value: 37.52261609729754
- type: nauc_mrr_at_5_std
value: 4.798233142719436
- type: nauc_ndcg_at_1000_diff1
value: 50.48831175822571
- type: nauc_ndcg_at_1000_max
value: 34.954324628161515
- type: nauc_ndcg_at_1000_std
value: 5.914974932163024
- type: nauc_ndcg_at_100_diff1
value: 50.22642462713412
- type: nauc_ndcg_at_100_max
value: 34.81144896724943
- type: nauc_ndcg_at_100_std
value: 5.269669826884739
- type: nauc_ndcg_at_10_diff1
value: 50.638035087354346
- type: nauc_ndcg_at_10_max
value: 35.548660617367744
- type: nauc_ndcg_at_10_std
value: 2.757672387486977
- type: nauc_ndcg_at_1_diff1
value: 55.97789723641661
- type: nauc_ndcg_at_1_max
value: 38.07741378971052
- type: nauc_ndcg_at_1_std
value: 3.1114912067800407
- type: nauc_ndcg_at_20_diff1
value: 50.94165876070302
- type: nauc_ndcg_at_20_max
value: 35.15720286509341
- type: nauc_ndcg_at_20_std
value: 3.1700542955934177
- type: nauc_ndcg_at_3_diff1
value: 51.6668031483535
- type: nauc_ndcg_at_3_max
value: 36.158392419704036
- type: nauc_ndcg_at_3_std
value: 1.7945130542865129
- type: nauc_ndcg_at_5_diff1
value: 51.40374511387644
- type: nauc_ndcg_at_5_max
value: 35.96747873017992
- type: nauc_ndcg_at_5_std
value: 2.4750496896017036
- type: nauc_precision_at_1000_diff1
value: -14.459395103980057
- type: nauc_precision_at_1000_max
value: 7.001254844374337
- type: nauc_precision_at_1000_std
value: 38.87250799651196
- type: nauc_precision_at_100_diff1
value: -7.015008098259738
- type: nauc_precision_at_100_max
value: 14.454169684224969
- type: nauc_precision_at_100_std
value: 40.615163341328035
- type: nauc_precision_at_10_diff1
value: 14.105573590736311
- type: nauc_precision_at_10_max
value: 27.637233565307927
- type: nauc_precision_at_10_std
value: 24.80384513569725
- type: nauc_precision_at_1_diff1
value: 55.97789723641661
- type: nauc_precision_at_1_max
value: 38.07741378971052
- type: nauc_precision_at_1_std
value: 3.1114912067800407
- type: nauc_precision_at_20_diff1
value: 6.826222425028856
- type: nauc_precision_at_20_max
value: 22.440750352931133
- type: nauc_precision_at_20_std
value: 30.650961826400664
- type: nauc_precision_at_3_diff1
value: 33.56939227622927
- type: nauc_precision_at_3_max
value: 35.81131949842977
- type: nauc_precision_at_3_std
value: 13.39631093898116
- type: nauc_precision_at_5_diff1
value: 25.327171466051347
- type: nauc_precision_at_5_max
value: 33.04313875843963
- type: nauc_precision_at_5_std
value: 19.62165639744543
- type: nauc_recall_at_1000_diff1
value: 34.60133056300212
- type: nauc_recall_at_1000_max
value: 21.161471663251515
- type: nauc_recall_at_1000_std
value: 32.74321904619018
- type: nauc_recall_at_100_diff1
value: 36.43348185795896
- type: nauc_recall_at_100_max
value: 25.704040738466205
- type: nauc_recall_at_100_std
value: 17.990567238645156
- type: nauc_recall_at_10_diff1
value: 42.694617737297676
- type: nauc_recall_at_10_max
value: 31.3298523819716
- type: nauc_recall_at_10_std
value: 2.384843550540601
- type: nauc_recall_at_1_diff1
value: 56.732243764713175
- type: nauc_recall_at_1_max
value: 30.49325212155099
- type: nauc_recall_at_1_std
value: -5.04800794470186
- type: nauc_recall_at_20_diff1
value: 43.176907776217455
- type: nauc_recall_at_20_max
value: 29.215827308916065
- type: nauc_recall_at_20_std
value: 4.147830621064018
- type: nauc_recall_at_3_diff1
value: 48.35837999847456
- type: nauc_recall_at_3_max
value: 31.92274839572281
- type: nauc_recall_at_3_std
value: -2.714807149637697
- type: nauc_recall_at_5_diff1
value: 46.351251919981635
- type: nauc_recall_at_5_max
value: 32.523267054288304
- type: nauc_recall_at_5_std
value: 0.4952928034547165
- type: ndcg_at_1
value: 40.0
- type: ndcg_at_10
value: 48.052
- type: ndcg_at_100
value: 52.07000000000001
- type: ndcg_at_1000
value: 54.064
- type: ndcg_at_20
value: 49.626
- type: ndcg_at_3
value: 43.902
- type: ndcg_at_5
value: 45.701
- type: precision_at_1
value: 40.0
- type: precision_at_10
value: 9.203999999999999
- type: precision_at_100
value: 1.438
- type: precision_at_1000
value: 0.188
- type: precision_at_20
value: 5.376
- type: precision_at_3
value: 21.295
- type: precision_at_5
value: 15.082999999999998
- type: recall_at_1
value: 31.691000000000003
- type: recall_at_10
value: 57.859
- type: recall_at_100
value: 75.107
- type: recall_at_1000
value: 87.679
- type: recall_at_20
value: 63.698
- type: recall_at_3
value: 45.379000000000005
- type: recall_at_5
value: 50.556999999999995
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval (default)
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: main_score
value: 56.423
- type: map_at_1
value: 38.517
- type: map_at_10
value: 50.510999999999996
- type: map_at_100
value: 51.568000000000005
- type: map_at_1000
value: 51.625
- type: map_at_20
value: 51.157
- type: map_at_3
value: 46.861000000000004
- type: map_at_5
value: 49.138
- type: mrr_at_1
value: 44.32601880877743
- type: mrr_at_10
value: 54.006518385828805
- type: mrr_at_100
value: 54.71981356159521
- type: mrr_at_1000
value: 54.752151957526316
- type: mrr_at_20
value: 54.476952471748106
- type: mrr_at_3
value: 51.22257053291538
- type: mrr_at_5
value: 53.10658307210038
- type: nauc_map_at_1000_diff1
value: 46.99591363896863
- type: nauc_map_at_1000_max
value: 36.24904681003381
- type: nauc_map_at_1000_std
value: 5.266222409182784
- type: nauc_map_at_100_diff1
value: 46.98084517367969
- type: nauc_map_at_100_max
value: 36.250423743143436
- type: nauc_map_at_100_std
value: 5.28301960645914
- type: nauc_map_at_10_diff1
value: 47.06708588191781
- type: nauc_map_at_10_max
value: 36.18369863811603
- type: nauc_map_at_10_std
value: 4.540089547074973
- type: nauc_map_at_1_diff1
value: 50.48083670895566
- type: nauc_map_at_1_max
value: 31.950879226720595
- type: nauc_map_at_1_std
value: 0.8257060985358229
- type: nauc_map_at_20_diff1
value: 46.998991583288415
- type: nauc_map_at_20_max
value: 36.22115301678039
- type: nauc_map_at_20_std
value: 5.082303558564342
- type: nauc_map_at_3_diff1
value: 47.7005416643811
- type: nauc_map_at_3_max
value: 35.88865564285155
- type: nauc_map_at_3_std
value: 2.944332455222102
- type: nauc_map_at_5_diff1
value: 47.312929177575874
- type: nauc_map_at_5_max
value: 35.862390825522844
- type: nauc_map_at_5_std
value: 3.81274507266821
- type: nauc_mrr_at_1000_diff1
value: 46.6759837669438
- type: nauc_mrr_at_1000_max
value: 36.70273979969576
- type: nauc_mrr_at_1000_std
value: 5.372740994750759
- type: nauc_mrr_at_100_diff1
value: 46.675225471247536
- type: nauc_mrr_at_100_max
value: 36.703302034269875
- type: nauc_mrr_at_100_std
value: 5.389605566226372
- type: nauc_mrr_at_10_diff1
value: 46.50353044791382
- type: nauc_mrr_at_10_max
value: 36.66777833991145
- type: nauc_mrr_at_10_std
value: 5.243423563011071
- type: nauc_mrr_at_1_diff1
value: 49.02972042252377
- type: nauc_mrr_at_1_max
value: 36.600499110729764
- type: nauc_mrr_at_1_std
value: 2.5711258912407953
- type: nauc_mrr_at_20_diff1
value: 46.625296101632095
- type: nauc_mrr_at_20_max
value: 36.678578716940855
- type: nauc_mrr_at_20_std
value: 5.406361664314628
- type: nauc_mrr_at_3_diff1
value: 46.907538354326825
- type: nauc_mrr_at_3_max
value: 36.91488611173621
- type: nauc_mrr_at_3_std
value: 3.8761762810100473
- type: nauc_mrr_at_5_diff1
value: 46.774337072791255
- type: nauc_mrr_at_5_max
value: 36.65454152790335
- type: nauc_mrr_at_5_std
value: 4.753826902883721
- type: nauc_ndcg_at_1000_diff1
value: 46.312300114931396
- type: nauc_ndcg_at_1000_max
value: 36.687577969558156
- type: nauc_ndcg_at_1000_std
value: 8.04218255348285
- type: nauc_ndcg_at_100_diff1
value: 45.91371707529375
- type: nauc_ndcg_at_100_max
value: 36.72698157851723
- type: nauc_ndcg_at_100_std
value: 8.62715881456232
- type: nauc_ndcg_at_10_diff1
value: 45.70764954649013
- type: nauc_ndcg_at_10_max
value: 36.42241644937269
- type: nauc_ndcg_at_10_std
value: 6.793309697483774
- type: nauc_ndcg_at_1_diff1
value: 49.02972042252377
- type: nauc_ndcg_at_1_max
value: 36.600499110729764
- type: nauc_ndcg_at_1_std
value: 2.5711258912407953
- type: nauc_ndcg_at_20_diff1
value: 45.71253409870376
- type: nauc_ndcg_at_20_max
value: 36.478750872235075
- type: nauc_ndcg_at_20_std
value: 8.032852116533649
- type: nauc_ndcg_at_3_diff1
value: 46.5055405749989
- type: nauc_ndcg_at_3_max
value: 36.55925519576953
- type: nauc_ndcg_at_3_std
value: 4.01635426914171
- type: nauc_ndcg_at_5_diff1
value: 46.17076704583506
- type: nauc_ndcg_at_5_max
value: 36.00194839608453
- type: nauc_ndcg_at_5_std
value: 5.290651961116129
- type: nauc_precision_at_1000_diff1
value: -7.810936686028834
- type: nauc_precision_at_1000_max
value: 2.4457731990668035
- type: nauc_precision_at_1000_std
value: 15.244382957052343
- type: nauc_precision_at_100_diff1
value: -6.24711281837766
- type: nauc_precision_at_100_max
value: 9.274662370763165
- type: nauc_precision_at_100_std
value: 21.156495677287772
- type: nauc_precision_at_10_diff1
value: 11.673391020454202
- type: nauc_precision_at_10_max
value: 23.642781032334476
- type: nauc_precision_at_10_std
value: 15.428694149947766
- type: nauc_precision_at_1_diff1
value: 49.02972042252377
- type: nauc_precision_at_1_max
value: 36.600499110729764
- type: nauc_precision_at_1_std
value: 2.5711258912407953
- type: nauc_precision_at_20_diff1
value: 4.320523799516288
- type: nauc_precision_at_20_max
value: 18.529188355144083
- type: nauc_precision_at_20_std
value: 20.63811919289391
- type: nauc_precision_at_3_diff1
value: 28.81527179707099
- type: nauc_precision_at_3_max
value: 34.12169505571048
- type: nauc_precision_at_3_std
value: 8.264026657534398
- type: nauc_precision_at_5_diff1
value: 20.643744683841586
- type: nauc_precision_at_5_max
value: 28.520212611799007
- type: nauc_precision_at_5_std
value: 11.159926260802324
- type: nauc_recall_at_1000_diff1
value: 47.89843496456478
- type: nauc_recall_at_1000_max
value: 48.19346950585018
- type: nauc_recall_at_1000_std
value: 69.35955862460499
- type: nauc_recall_at_100_diff1
value: 38.5657115857761
- type: nauc_recall_at_100_max
value: 39.1799100059013
- type: nauc_recall_at_100_std
value: 37.26868224318161
- type: nauc_recall_at_10_diff1
value: 39.70450871697248
- type: nauc_recall_at_10_max
value: 34.7230529664253
- type: nauc_recall_at_10_std
value: 12.967503176766982
- type: nauc_recall_at_1_diff1
value: 50.48083670895566
- type: nauc_recall_at_1_max
value: 31.950879226720595
- type: nauc_recall_at_1_std
value: 0.8257060985358229
- type: nauc_recall_at_20_diff1
value: 38.52009076825669
- type: nauc_recall_at_20_max
value: 35.067067464590004
- type: nauc_recall_at_20_std
value: 21.157205479969708
- type: nauc_recall_at_3_diff1
value: 44.359044172441294
- type: nauc_recall_at_3_max
value: 35.53948139234034
- type: nauc_recall_at_3_std
value: 3.9964883607424118
- type: nauc_recall_at_5_diff1
value: 42.071462939937625
- type: nauc_recall_at_5_max
value: 33.59544974420819
- type: nauc_recall_at_5_std
value: 7.414365501450481
- type: ndcg_at_1
value: 44.326
- type: ndcg_at_10
value: 56.423
- type: ndcg_at_100
value: 60.626999999999995
- type: ndcg_at_1000
value: 61.78
- type: ndcg_at_20
value: 58.336
- type: ndcg_at_3
value: 50.32299999999999
- type: ndcg_at_5
value: 53.808
- type: precision_at_1
value: 44.326
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.2189999999999999
- type: precision_at_1000
value: 0.136
- type: precision_at_20
value: 5.176
- type: precision_at_3
value: 22.487
- type: precision_at_5
value: 15.9
- type: recall_at_1
value: 38.517
- type: recall_at_10
value: 70.291
- type: recall_at_100
value: 88.53999999999999
- type: recall_at_1000
value: 96.67
- type: recall_at_20
value: 77.459
- type: recall_at_3
value: 54.44
- type: recall_at_5
value: 62.863
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval (default)
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: main_score
value: 40.245999999999995
- type: map_at_1
value: 25.66
- type: map_at_10
value: 34.781
- type: map_at_100
value: 35.825
- type: map_at_1000
value: 35.900999999999996
- type: map_at_20
value: 35.312
- type: map_at_3
value: 31.649
- type: map_at_5
value: 33.446
- type: mrr_at_1
value: 27.683615819209038
- type: mrr_at_10
value: 36.88897856694467
- type: mrr_at_100
value: 37.7983608873635
- type: mrr_at_1000
value: 37.85659419024201
- type: mrr_at_20
value: 37.40279188480636
- type: mrr_at_3
value: 33.97363465160076
- type: mrr_at_5
value: 35.76459510357815
- type: nauc_map_at_1000_diff1
value: 39.85966393937221
- type: nauc_map_at_1000_max
value: 27.627327546922082
- type: nauc_map_at_1000_std
value: -2.437000048637541
- type: nauc_map_at_100_diff1
value: 39.84937090664403
- type: nauc_map_at_100_max
value: 27.637564346944988
- type: nauc_map_at_100_std
value: -2.4109683806023408
- type: nauc_map_at_10_diff1
value: 39.71424425034042
- type: nauc_map_at_10_max
value: 27.872378136740437
- type: nauc_map_at_10_std
value: -2.8569524387609566
- type: nauc_map_at_1_diff1
value: 45.91775607893774
- type: nauc_map_at_1_max
value: 26.899324806364007
- type: nauc_map_at_1_std
value: -5.498609993557515
- type: nauc_map_at_20_diff1
value: 39.883943198146106
- type: nauc_map_at_20_max
value: 27.64309227085422
- type: nauc_map_at_20_std
value: -2.5654741454169816
- type: nauc_map_at_3_diff1
value: 39.91753278618007
- type: nauc_map_at_3_max
value: 27.11865653999877
- type: nauc_map_at_3_std
value: -3.3286492180678384
- type: nauc_map_at_5_diff1
value: 39.6313699695734
- type: nauc_map_at_5_max
value: 27.710946419917548
- type: nauc_map_at_5_std
value: -2.920297786058066
- type: nauc_mrr_at_1000_diff1
value: 39.690653898179
- type: nauc_mrr_at_1000_max
value: 27.18398591982711
- type: nauc_mrr_at_1000_std
value: -2.606447174750376
- type: nauc_mrr_at_100_diff1
value: 39.689803477387656
- type: nauc_mrr_at_100_max
value: 27.189479576677762
- type: nauc_mrr_at_100_std
value: -2.570807442132712
- type: nauc_mrr_at_10_diff1
value: 39.399614568431915
- type: nauc_mrr_at_10_max
value: 27.304654766506253
- type: nauc_mrr_at_10_std
value: -2.8847962104122584
- type: nauc_mrr_at_1_diff1
value: 45.70161189197341
- type: nauc_mrr_at_1_max
value: 27.02826003278829
- type: nauc_mrr_at_1_std
value: -4.831200831009949
- type: nauc_mrr_at_20_diff1
value: 39.69394763509078
- type: nauc_mrr_at_20_max
value: 27.201336203029232
- type: nauc_mrr_at_20_std
value: -2.6871497640498765
- type: nauc_mrr_at_3_diff1
value: 39.220307350990346
- type: nauc_mrr_at_3_max
value: 26.7053856409676
- type: nauc_mrr_at_3_std
value: -3.2176631206275514
- type: nauc_mrr_at_5_diff1
value: 39.166108393948406
- type: nauc_mrr_at_5_max
value: 27.084050550858557
- type: nauc_mrr_at_5_std
value: -2.87556996749801
- type: nauc_ndcg_at_1000_diff1
value: 38.603857523266925
- type: nauc_ndcg_at_1000_max
value: 27.45135486355824
- type: nauc_ndcg_at_1000_std
value: -0.46660995944134603
- type: nauc_ndcg_at_100_diff1
value: 38.444207274649884
- type: nauc_ndcg_at_100_max
value: 27.549884957721194
- type: nauc_ndcg_at_100_std
value: 0.47388375830707924
- type: nauc_ndcg_at_10_diff1
value: 37.72567187058473
- type: nauc_ndcg_at_10_max
value: 28.44081574137556
- type: nauc_ndcg_at_10_std
value: -1.8534359145108148
- type: nauc_ndcg_at_1_diff1
value: 45.70161189197341
- type: nauc_ndcg_at_1_max
value: 27.02826003278829
- type: nauc_ndcg_at_1_std
value: -4.831200831009949
- type: nauc_ndcg_at_20_diff1
value: 38.44184854108953
- type: nauc_ndcg_at_20_max
value: 27.679973388870614
- type: nauc_ndcg_at_20_std
value: -0.898582155647988
- type: nauc_ndcg_at_3_diff1
value: 37.97088409897179
- type: nauc_ndcg_at_3_max
value: 27.106412295185066
- type: nauc_ndcg_at_3_std
value: -2.730164275362466
- type: nauc_ndcg_at_5_diff1
value: 37.37607068800825
- type: nauc_ndcg_at_5_max
value: 27.9502784140078
- type: nauc_ndcg_at_5_std
value: -2.0027830470055075
- type: nauc_precision_at_1000_diff1
value: 0.5286110453963512
- type: nauc_precision_at_1000_max
value: -2.3318515785442813
- type: nauc_precision_at_1000_std
value: 7.80079288314789
- type: nauc_precision_at_100_diff1
value: 13.667186642269913
- type: nauc_precision_at_100_max
value: 9.942092016059734
- type: nauc_precision_at_100_std
value: 12.50332782268112
- type: nauc_precision_at_10_diff1
value: 26.281496960169953
- type: nauc_precision_at_10_max
value: 24.46085080936575
- type: nauc_precision_at_10_std
value: 2.8074535999287322
- type: nauc_precision_at_1_diff1
value: 45.70161189197341
- type: nauc_precision_at_1_max
value: 27.02826003278829
- type: nauc_precision_at_1_std
value: -4.831200831009949
- type: nauc_precision_at_20_diff1
value: 25.585868175418412
- type: nauc_precision_at_20_max
value: 19.640567118702023
- type: nauc_precision_at_20_std
value: 7.0865072321039
- type: nauc_precision_at_3_diff1
value: 31.522547430107718
- type: nauc_precision_at_3_max
value: 25.87424549883876
- type: nauc_precision_at_3_std
value: -0.6508524960745287
- type: nauc_precision_at_5_diff1
value: 28.958347089826553
- type: nauc_precision_at_5_max
value: 26.541109281414073
- type: nauc_precision_at_5_std
value: 1.8354704960749444
- type: nauc_recall_at_1000_diff1
value: 25.74128427270277
- type: nauc_recall_at_1000_max
value: 21.011729073123906
- type: nauc_recall_at_1000_std
value: 29.766333163064136
- type: nauc_recall_at_100_diff1
value: 31.785700068938166
- type: nauc_recall_at_100_max
value: 25.476332277500607
- type: nauc_recall_at_100_std
value: 20.47758699126873
- type: nauc_recall_at_10_diff1
value: 31.186789594770264
- type: nauc_recall_at_10_max
value: 30.23366916255125
- type: nauc_recall_at_10_std
value: 1.0690146258142572
- type: nauc_recall_at_1_diff1
value: 45.91775607893774
- type: nauc_recall_at_1_max
value: 26.899324806364007
- type: nauc_recall_at_1_std
value: -5.498609993557515
- type: nauc_recall_at_20_diff1
value: 33.32210083840443
- type: nauc_recall_at_20_max
value: 26.910239736720104
- type: nauc_recall_at_20_std
value: 5.087368762147268
- type: nauc_recall_at_3_diff1
value: 32.3606502852846
- type: nauc_recall_at_3_max
value: 26.86643484335275
- type: nauc_recall_at_3_std
value: -0.9468851994313872
- type: nauc_recall_at_5_diff1
value: 30.58200958021165
- type: nauc_recall_at_5_max
value: 28.81049824914163
- type: nauc_recall_at_5_std
value: 0.40032324122162105
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 40.245999999999995
- type: ndcg_at_100
value: 45.506
- type: ndcg_at_1000
value: 47.461999999999996
- type: ndcg_at_20
value: 42.122
- type: ndcg_at_3
value: 34.209
- type: ndcg_at_5
value: 37.279
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 6.3839999999999995
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.644
- type: precision_at_3
value: 14.539
- type: precision_at_5
value: 10.576
- type: recall_at_1
value: 25.66
- type: recall_at_10
value: 55.062999999999995
- type: recall_at_100
value: 79.38199999999999
- type: recall_at_1000
value: 94.233
- type: recall_at_20
value: 62.082
- type: recall_at_3
value: 39.078
- type: recall_at_5
value: 46.236
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval (default)
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: main_score
value: 33.599000000000004
- type: map_at_1
value: 17.586
- type: map_at_10
value: 27.32
- type: map_at_100
value: 28.799999999999997
- type: map_at_1000
value: 28.921000000000003
- type: map_at_20
value: 28.153
- type: map_at_3
value: 24.066000000000003
- type: map_at_5
value: 25.755
- type: mrr_at_1
value: 22.263681592039802
- type: mrr_at_10
value: 32.360469478006785
- type: mrr_at_100
value: 33.438437513063704
- type: mrr_at_1000
value: 33.497473884762094
- type: mrr_at_20
value: 33.04969965022066
- type: mrr_at_3
value: 29.415422885572156
- type: mrr_at_5
value: 30.99502487562189
- type: nauc_map_at_1000_diff1
value: 27.265553741965775
- type: nauc_map_at_1000_max
value: 19.288555820766756
- type: nauc_map_at_1000_std
value: 1.7933416168321978
- type: nauc_map_at_100_diff1
value: 27.260040534250695
- type: nauc_map_at_100_max
value: 19.304892141398717
- type: nauc_map_at_100_std
value: 1.8491919829209627
- type: nauc_map_at_10_diff1
value: 26.86944291051016
- type: nauc_map_at_10_max
value: 18.92320522759212
- type: nauc_map_at_10_std
value: 0.889749881009448
- type: nauc_map_at_1_diff1
value: 30.584243017075806
- type: nauc_map_at_1_max
value: 13.491468441422066
- type: nauc_map_at_1_std
value: -0.8751763698025199
- type: nauc_map_at_20_diff1
value: 27.227733914801732
- type: nauc_map_at_20_max
value: 19.278767798642207
- type: nauc_map_at_20_std
value: 1.4312898630264221
- type: nauc_map_at_3_diff1
value: 26.919576048874767
- type: nauc_map_at_3_max
value: 18.312759768115967
- type: nauc_map_at_3_std
value: -0.5642361688764358
- type: nauc_map_at_5_diff1
value: 27.04032364592226
- type: nauc_map_at_5_max
value: 19.191923558129698
- type: nauc_map_at_5_std
value: 0.14080066912052358
- type: nauc_mrr_at_1000_diff1
value: 27.136068664109448
- type: nauc_mrr_at_1000_max
value: 22.022262336934386
- type: nauc_mrr_at_1000_std
value: 3.3308260159907976
- type: nauc_mrr_at_100_diff1
value: 27.147288894737333
- type: nauc_mrr_at_100_max
value: 22.02852436815082
- type: nauc_mrr_at_100_std
value: 3.3550379360464526
- type: nauc_mrr_at_10_diff1
value: 26.79942635668937
- type: nauc_mrr_at_10_max
value: 22.030637334814642
- type: nauc_mrr_at_10_std
value: 2.867852159546408
- type: nauc_mrr_at_1_diff1
value: 29.595744930714023
- type: nauc_mrr_at_1_max
value: 17.736581194275356
- type: nauc_mrr_at_1_std
value: 0.2159541136892455
- type: nauc_mrr_at_20_diff1
value: 27.176010332894013
- type: nauc_mrr_at_20_max
value: 22.13536761286141
- type: nauc_mrr_at_20_std
value: 3.237439208098252
- type: nauc_mrr_at_3_diff1
value: 26.57000851252062
- type: nauc_mrr_at_3_max
value: 21.747583860129698
- type: nauc_mrr_at_3_std
value: 1.721057838979949
- type: nauc_mrr_at_5_diff1
value: 26.92551416387028
- type: nauc_mrr_at_5_max
value: 22.42993672746205
- type: nauc_mrr_at_5_std
value: 2.725843108347625
- type: nauc_ndcg_at_1000_diff1
value: 27.46739757065543
- type: nauc_ndcg_at_1000_max
value: 21.041702596702677
- type: nauc_ndcg_at_1000_std
value: 5.604780462883483
- type: nauc_ndcg_at_100_diff1
value: 27.652630070854155
- type: nauc_ndcg_at_100_max
value: 21.81166185983459
- type: nauc_ndcg_at_100_std
value: 6.698607031446962
- type: nauc_ndcg_at_10_diff1
value: 26.00697734505188
- type: nauc_ndcg_at_10_max
value: 20.828161505269204
- type: nauc_ndcg_at_10_std
value: 2.8399382855194033
- type: nauc_ndcg_at_1_diff1
value: 29.595744930714023
- type: nauc_ndcg_at_1_max
value: 17.736581194275356
- type: nauc_ndcg_at_1_std
value: 0.2159541136892455
- type: nauc_ndcg_at_20_diff1
value: 27.27378051779869
- type: nauc_ndcg_at_20_max
value: 21.736204369394024
- type: nauc_ndcg_at_20_std
value: 4.739094883714155
- type: nauc_ndcg_at_3_diff1
value: 26.57231894661191
- type: nauc_ndcg_at_3_max
value: 20.93227880070676
- type: nauc_ndcg_at_3_std
value: 0.024589831513874137
- type: nauc_ndcg_at_5_diff1
value: 26.600828085337064
- type: nauc_ndcg_at_5_max
value: 21.773794661183416
- type: nauc_ndcg_at_5_std
value: 1.5522574657313302
- type: nauc_precision_at_1000_diff1
value: 3.4210541212862537
- type: nauc_precision_at_1000_max
value: 3.102103455114947
- type: nauc_precision_at_1000_std
value: 1.7521716451583618
- type: nauc_precision_at_100_diff1
value: 11.443300353934575
- type: nauc_precision_at_100_max
value: 14.660009751798997
- type: nauc_precision_at_100_std
value: 12.668177644524992
- type: nauc_precision_at_10_diff1
value: 17.394001289019975
- type: nauc_precision_at_10_max
value: 22.223278134383104
- type: nauc_precision_at_10_std
value: 7.242926879010027
- type: nauc_precision_at_1_diff1
value: 29.595744930714023
- type: nauc_precision_at_1_max
value: 17.736581194275356
- type: nauc_precision_at_1_std
value: 0.2159541136892455
- type: nauc_precision_at_20_diff1
value: 17.43115026349507
- type: nauc_precision_at_20_max
value: 21.47538261589186
- type: nauc_precision_at_20_std
value: 10.237040595580279
- type: nauc_precision_at_3_diff1
value: 22.012366289647648
- type: nauc_precision_at_3_max
value: 25.106312117807487
- type: nauc_precision_at_3_std
value: 1.9995028727881818
- type: nauc_precision_at_5_diff1
value: 20.398546387324117
- type: nauc_precision_at_5_max
value: 26.303228187054806
- type: nauc_precision_at_5_std
value: 5.564748189759881
- type: nauc_recall_at_1000_diff1
value: 29.03481056576388
- type: nauc_recall_at_1000_max
value: 17.81464147740126
- type: nauc_recall_at_1000_std
value: 52.084053180233646
- type: nauc_recall_at_100_diff1
value: 28.23982991718224
- type: nauc_recall_at_100_max
value: 26.168366200103815
- type: nauc_recall_at_100_std
value: 28.36050476271469
- type: nauc_recall_at_10_diff1
value: 21.64818157792201
- type: nauc_recall_at_10_max
value: 20.853972890132304
- type: nauc_recall_at_10_std
value: 5.713144094583624
- type: nauc_recall_at_1_diff1
value: 30.584243017075806
- type: nauc_recall_at_1_max
value: 13.491468441422066
- type: nauc_recall_at_1_std
value: -0.8751763698025199
- type: nauc_recall_at_20_diff1
value: 25.370812868482425
- type: nauc_recall_at_20_max
value: 23.485918438346335
- type: nauc_recall_at_20_std
value: 13.06270351478354
- type: nauc_recall_at_3_diff1
value: 23.22354479137504
- type: nauc_recall_at_3_max
value: 21.931741628585574
- type: nauc_recall_at_3_std
value: 0.22215343527463874
- type: nauc_recall_at_5_diff1
value: 23.762779317387583
- type: nauc_recall_at_5_max
value: 23.86601516024228
- type: nauc_recall_at_5_std
value: 2.9938661959173722
- type: ndcg_at_1
value: 22.264
- type: ndcg_at_10
value: 33.599000000000004
- type: ndcg_at_100
value: 40.149
- type: ndcg_at_1000
value: 42.663000000000004
- type: ndcg_at_20
value: 36.329
- type: ndcg_at_3
value: 27.736
- type: ndcg_at_5
value: 30.219
- type: precision_at_1
value: 22.264
- type: precision_at_10
value: 6.542000000000001
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.147
- type: precision_at_20
value: 4.061
- type: precision_at_3
value: 14.013
- type: precision_at_5
value: 10.274
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 47.932
- type: recall_at_100
value: 75.958
- type: recall_at_1000
value: 93.512
- type: recall_at_20
value: 57.708999999999996
- type: recall_at_3
value: 31.46
- type: recall_at_5
value: 37.842
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval (default)
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: main_score
value: 47.410000000000004
- type: map_at_1
value: 28.971999999999998
- type: map_at_10
value: 40.96
- type: map_at_100
value: 42.331
- type: map_at_1000
value: 42.441
- type: map_at_20
value: 41.742000000000004
- type: map_at_3
value: 37.393
- type: map_at_5
value: 39.407
- type: mrr_at_1
value: 36.092396535129936
- type: mrr_at_10
value: 46.3360297599951
- type: mrr_at_100
value: 47.12743931915083
- type: mrr_at_1000
value: 47.17149558717527
- type: mrr_at_20
value: 46.764900340591545
- type: mrr_at_3
value: 43.50336862367658
- type: mrr_at_5
value: 45.048123195380114
- type: nauc_map_at_1000_diff1
value: 48.875356927744015
- type: nauc_map_at_1000_max
value: 26.846314785374048
- type: nauc_map_at_1000_std
value: 0.23720537516106452
- type: nauc_map_at_100_diff1
value: 48.82990495193183
- type: nauc_map_at_100_max
value: 26.843711103433947
- type: nauc_map_at_100_std
value: 0.25686095628081784
- type: nauc_map_at_10_diff1
value: 48.72231653223161
- type: nauc_map_at_10_max
value: 26.364353126291522
- type: nauc_map_at_10_std
value: 0.01100727529750763
- type: nauc_map_at_1_diff1
value: 51.96344574112589
- type: nauc_map_at_1_max
value: 27.671021546156044
- type: nauc_map_at_1_std
value: -4.6808708326389805
- type: nauc_map_at_20_diff1
value: 48.870849394709346
- type: nauc_map_at_20_max
value: 26.813670876224883
- type: nauc_map_at_20_std
value: 0.2352693253381299
- type: nauc_map_at_3_diff1
value: 49.529072015100326
- type: nauc_map_at_3_max
value: 27.77400144483059
- type: nauc_map_at_3_std
value: -0.6453151987449416
- type: nauc_map_at_5_diff1
value: 49.20710807541119
- type: nauc_map_at_5_max
value: 27.177488493074755
- type: nauc_map_at_5_std
value: -0.25587902411032826
- type: nauc_mrr_at_1000_diff1
value: 48.498262710122425
- type: nauc_mrr_at_1000_max
value: 26.11751051811526
- type: nauc_mrr_at_1000_std
value: -0.7728285987105216
- type: nauc_mrr_at_100_diff1
value: 48.48746660434456
- type: nauc_mrr_at_100_max
value: 26.115163451470647
- type: nauc_mrr_at_100_std
value: -0.7480131276402198
- type: nauc_mrr_at_10_diff1
value: 48.43136858217138
- type: nauc_mrr_at_10_max
value: 25.834024688307604
- type: nauc_mrr_at_10_std
value: -0.9430552221216183
- type: nauc_mrr_at_1_diff1
value: 50.088598533173354
- type: nauc_mrr_at_1_max
value: 27.648802533446197
- type: nauc_mrr_at_1_std
value: -3.7628727544097984
- type: nauc_mrr_at_20_diff1
value: 48.473967578999215
- type: nauc_mrr_at_20_max
value: 26.091998126081734
- type: nauc_mrr_at_20_std
value: -0.7681300813435199
- type: nauc_mrr_at_3_diff1
value: 48.69610564249302
- type: nauc_mrr_at_3_max
value: 27.373923497327624
- type: nauc_mrr_at_3_std
value: -1.2747465922726908
- type: nauc_mrr_at_5_diff1
value: 48.53658899050662
- type: nauc_mrr_at_5_max
value: 26.49833197267966
- type: nauc_mrr_at_5_std
value: -0.8503446744063664
- type: nauc_ndcg_at_1000_diff1
value: 48.467870789955406
- type: nauc_ndcg_at_1000_max
value: 26.04777255889547
- type: nauc_ndcg_at_1000_std
value: 1.6645313343373058
- type: nauc_ndcg_at_100_diff1
value: 47.80533775872007
- type: nauc_ndcg_at_100_max
value: 26.106122630999174
- type: nauc_ndcg_at_100_std
value: 2.456751351490524
- type: nauc_ndcg_at_10_diff1
value: 47.57301034996511
- type: nauc_ndcg_at_10_max
value: 24.379146216030552
- type: nauc_ndcg_at_10_std
value: 1.2579497129670234
- type: nauc_ndcg_at_1_diff1
value: 50.088598533173354
- type: nauc_ndcg_at_1_max
value: 27.648802533446197
- type: nauc_ndcg_at_1_std
value: -3.7628727544097984
- type: nauc_ndcg_at_20_diff1
value: 47.87138595331042
- type: nauc_ndcg_at_20_max
value: 25.648148427942452
- type: nauc_ndcg_at_20_std
value: 2.1415614628731148
- type: nauc_ndcg_at_3_diff1
value: 48.40186907831459
- type: nauc_ndcg_at_3_max
value: 27.015191238802633
- type: nauc_ndcg_at_3_std
value: -0.28368565093265813
- type: nauc_ndcg_at_5_diff1
value: 48.43525178181797
- type: nauc_ndcg_at_5_max
value: 26.033136810207125
- type: nauc_ndcg_at_5_std
value: 0.5903319782637264
- type: nauc_precision_at_1000_diff1
value: -5.050204072247187
- type: nauc_precision_at_1000_max
value: -1.706061543844424
- type: nauc_precision_at_1000_std
value: 0.4935798158915392
- type: nauc_precision_at_100_diff1
value: 1.581628126436549
- type: nauc_precision_at_100_max
value: 5.131864973231214
- type: nauc_precision_at_100_std
value: 5.818785250601078
- type: nauc_precision_at_10_diff1
value: 17.826909304567316
- type: nauc_precision_at_10_max
value: 10.047556755952215
- type: nauc_precision_at_10_std
value: 5.828288769562702
- type: nauc_precision_at_1_diff1
value: 50.088598533173354
- type: nauc_precision_at_1_max
value: 27.648802533446197
- type: nauc_precision_at_1_std
value: -3.7628727544097984
- type: nauc_precision_at_20_diff1
value: 12.647456163352691
- type: nauc_precision_at_20_max
value: 10.821622040896782
- type: nauc_precision_at_20_std
value: 6.6782471423372405
- type: nauc_precision_at_3_diff1
value: 33.03366844205296
- type: nauc_precision_at_3_max
value: 21.61654824915879
- type: nauc_precision_at_3_std
value: 3.1117767791018403
- type: nauc_precision_at_5_diff1
value: 25.873738881952193
- type: nauc_precision_at_5_max
value: 16.50897302333537
- type: nauc_precision_at_5_std
value: 4.306391187216285
- type: nauc_recall_at_1000_diff1
value: 46.920916880807226
- type: nauc_recall_at_1000_max
value: 18.93033931407027
- type: nauc_recall_at_1000_std
value: 30.343625789039912
- type: nauc_recall_at_100_diff1
value: 36.99917690641126
- type: nauc_recall_at_100_max
value: 21.9225154657857
- type: nauc_recall_at_100_std
value: 20.18252525903621
- type: nauc_recall_at_10_diff1
value: 40.849017544403544
- type: nauc_recall_at_10_max
value: 15.573050231627782
- type: nauc_recall_at_10_std
value: 6.199240253446229
- type: nauc_recall_at_1_diff1
value: 51.96344574112589
- type: nauc_recall_at_1_max
value: 27.671021546156044
- type: nauc_recall_at_1_std
value: -4.6808708326389805
- type: nauc_recall_at_20_diff1
value: 41.15264820688897
- type: nauc_recall_at_20_max
value: 19.50230922026062
- type: nauc_recall_at_20_std
value: 11.139703256952268
- type: nauc_recall_at_3_diff1
value: 45.76731873825665
- type: nauc_recall_at_3_max
value: 24.89502530374308
- type: nauc_recall_at_3_std
value: 1.8833756018456458
- type: nauc_recall_at_5_diff1
value: 44.65491098304952
- type: nauc_recall_at_5_max
value: 22.218813760031296
- type: nauc_recall_at_5_std
value: 3.985541104014005
- type: ndcg_at_1
value: 36.092
- type: ndcg_at_10
value: 47.410000000000004
- type: ndcg_at_100
value: 52.829
- type: ndcg_at_1000
value: 54.736
- type: ndcg_at_20
value: 49.563
- type: ndcg_at_3
value: 41.724
- type: ndcg_at_5
value: 44.358
- type: precision_at_1
value: 36.092
- type: precision_at_10
value: 8.807
- type: precision_at_100
value: 1.336
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 5.140000000000001
- type: precision_at_3
value: 20.244
- type: precision_at_5
value: 14.418000000000001
- type: recall_at_1
value: 28.971999999999998
- type: recall_at_10
value: 61.160000000000004
- type: recall_at_100
value: 83.60600000000001
- type: recall_at_1000
value: 95.696
- type: recall_at_20
value: 68.569
- type: recall_at_3
value: 45.269
- type: recall_at_5
value: 52.168000000000006
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval (default)
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: main_score
value: 47.107
- type: map_at_1
value: 29.509999999999998
- type: map_at_10
value: 40.872
- type: map_at_100
value: 42.349
- type: map_at_1000
value: 42.441
- type: map_at_20
value: 41.743
- type: map_at_3
value: 37.174
- type: map_at_5
value: 39.232
- type: mrr_at_1
value: 36.41552511415525
- type: mrr_at_10
value: 46.10585634558234
- type: mrr_at_100
value: 47.04388507378313
- type: mrr_at_1000
value: 47.085192151800705
- type: mrr_at_20
value: 46.71053512338389
- type: mrr_at_3
value: 43.45509893455097
- type: mrr_at_5
value: 44.95624048706236
- type: nauc_map_at_1000_diff1
value: 47.17063593584487
- type: nauc_map_at_1000_max
value: 35.6595416622568
- type: nauc_map_at_1000_std
value: 1.882360177794315
- type: nauc_map_at_100_diff1
value: 47.141224266698956
- type: nauc_map_at_100_max
value: 35.64890271359889
- type: nauc_map_at_100_std
value: 1.909040104973397
- type: nauc_map_at_10_diff1
value: 47.0080172506447
- type: nauc_map_at_10_max
value: 35.6271740598076
- type: nauc_map_at_10_std
value: 1.5963064045936786
- type: nauc_map_at_1_diff1
value: 50.45344698710353
- type: nauc_map_at_1_max
value: 34.1407536673108
- type: nauc_map_at_1_std
value: -2.503693156800745
- type: nauc_map_at_20_diff1
value: 47.198265744193066
- type: nauc_map_at_20_max
value: 35.59983959295096
- type: nauc_map_at_20_std
value: 1.709629193868128
- type: nauc_map_at_3_diff1
value: 47.86035115628325
- type: nauc_map_at_3_max
value: 33.91453079017758
- type: nauc_map_at_3_std
value: -1.0125268264345189
- type: nauc_map_at_5_diff1
value: 47.57075430825601
- type: nauc_map_at_5_max
value: 35.340050213538674
- type: nauc_map_at_5_std
value: 0.565360701196888
- type: nauc_mrr_at_1000_diff1
value: 46.19502136847612
- type: nauc_mrr_at_1000_max
value: 36.22787621665649
- type: nauc_mrr_at_1000_std
value: 0.871072004307322
- type: nauc_mrr_at_100_diff1
value: 46.18202150096684
- type: nauc_mrr_at_100_max
value: 36.2180237985802
- type: nauc_mrr_at_100_std
value: 0.9124059695477915
- type: nauc_mrr_at_10_diff1
value: 46.016490051238904
- type: nauc_mrr_at_10_max
value: 36.19342604363148
- type: nauc_mrr_at_10_std
value: 0.9071792646788923
- type: nauc_mrr_at_1_diff1
value: 50.04822644213264
- type: nauc_mrr_at_1_max
value: 38.40049220874411
- type: nauc_mrr_at_1_std
value: -0.4331805170196953
- type: nauc_mrr_at_20_diff1
value: 46.154472362472056
- type: nauc_mrr_at_20_max
value: 36.21027910317236
- type: nauc_mrr_at_20_std
value: 0.7953830560986073
- type: nauc_mrr_at_3_diff1
value: 46.69193692769359
- type: nauc_mrr_at_3_max
value: 36.09347122586123
- type: nauc_mrr_at_3_std
value: -0.8314592280863028
- type: nauc_mrr_at_5_diff1
value: 46.36247573613005
- type: nauc_mrr_at_5_max
value: 36.1332024555296
- type: nauc_mrr_at_5_std
value: 0.08254138511110683
- type: nauc_ndcg_at_1000_diff1
value: 45.502836278293714
- type: nauc_ndcg_at_1000_max
value: 35.46858202686828
- type: nauc_ndcg_at_1000_std
value: 4.220566466316345
- type: nauc_ndcg_at_100_diff1
value: 44.97146510067551
- type: nauc_ndcg_at_100_max
value: 35.20514680813267
- type: nauc_ndcg_at_100_std
value: 5.3327590512159295
- type: nauc_ndcg_at_10_diff1
value: 44.77893725971796
- type: nauc_ndcg_at_10_max
value: 35.30984188181181
- type: nauc_ndcg_at_10_std
value: 3.643838626739208
- type: nauc_ndcg_at_1_diff1
value: 50.04822644213264
- type: nauc_ndcg_at_1_max
value: 38.40049220874411
- type: nauc_ndcg_at_1_std
value: -0.4331805170196953
- type: nauc_ndcg_at_20_diff1
value: 45.347579096264255
- type: nauc_ndcg_at_20_max
value: 35.23900153649932
- type: nauc_ndcg_at_20_std
value: 3.870932080127777
- type: nauc_ndcg_at_3_diff1
value: 45.73489028100815
- type: nauc_ndcg_at_3_max
value: 33.456282441683534
- type: nauc_ndcg_at_3_std
value: -0.4316489511717149
- type: nauc_ndcg_at_5_diff1
value: 45.64448042343172
- type: nauc_ndcg_at_5_max
value: 34.82550522784654
- type: nauc_ndcg_at_5_std
value: 1.625202909591719
- type: nauc_precision_at_1000_diff1
value: -11.082584414320458
- type: nauc_precision_at_1000_max
value: -0.10525239966679063
- type: nauc_precision_at_1000_std
value: 1.2049688164002124
- type: nauc_precision_at_100_diff1
value: -4.401663460913719
- type: nauc_precision_at_100_max
value: 6.217580097767219
- type: nauc_precision_at_100_std
value: 11.507170914733113
- type: nauc_precision_at_10_diff1
value: 15.316762589026817
- type: nauc_precision_at_10_max
value: 24.094651080086884
- type: nauc_precision_at_10_std
value: 14.997661405160551
- type: nauc_precision_at_1_diff1
value: 50.04822644213264
- type: nauc_precision_at_1_max
value: 38.40049220874411
- type: nauc_precision_at_1_std
value: -0.4331805170196953
- type: nauc_precision_at_20_diff1
value: 9.71755375786461
- type: nauc_precision_at_20_max
value: 17.50245364945517
- type: nauc_precision_at_20_std
value: 13.42442276093188
- type: nauc_precision_at_3_diff1
value: 33.92303910717078
- type: nauc_precision_at_3_max
value: 31.577604822025844
- type: nauc_precision_at_3_std
value: 4.225871813818534
- type: nauc_precision_at_5_diff1
value: 26.434077412071776
- type: nauc_precision_at_5_max
value: 30.415493182198862
- type: nauc_precision_at_5_std
value: 9.962587204978579
- type: nauc_recall_at_1000_diff1
value: 19.583141827294416
- type: nauc_recall_at_1000_max
value: 25.331531875118163
- type: nauc_recall_at_1000_std
value: 47.19745406634415
- type: nauc_recall_at_100_diff1
value: 28.38177952031043
- type: nauc_recall_at_100_max
value: 27.04348472020136
- type: nauc_recall_at_100_std
value: 32.64978369730068
- type: nauc_recall_at_10_diff1
value: 36.77645976843529
- type: nauc_recall_at_10_max
value: 31.508362325677286
- type: nauc_recall_at_10_std
value: 9.845183301924783
- type: nauc_recall_at_1_diff1
value: 50.45344698710353
- type: nauc_recall_at_1_max
value: 34.1407536673108
- type: nauc_recall_at_1_std
value: -2.503693156800745
- type: nauc_recall_at_20_diff1
value: 37.1245830532323
- type: nauc_recall_at_20_max
value: 30.01404898730656
- type: nauc_recall_at_20_std
value: 11.991031997571183
- type: nauc_recall_at_3_diff1
value: 41.50397374838714
- type: nauc_recall_at_3_max
value: 28.605530200805894
- type: nauc_recall_at_3_std
value: -0.2718652433235268
- type: nauc_recall_at_5_diff1
value: 39.85347018437693
- type: nauc_recall_at_5_max
value: 30.8839592452558
- type: nauc_recall_at_5_std
value: 4.6501737002456505
- type: ndcg_at_1
value: 36.416
- type: ndcg_at_10
value: 47.107
- type: ndcg_at_100
value: 52.998999999999995
- type: ndcg_at_1000
value: 54.647
- type: ndcg_at_20
value: 49.748
- type: ndcg_at_3
value: 41.555
- type: ndcg_at_5
value: 44.079
- type: precision_at_1
value: 36.416
- type: precision_at_10
value: 8.870000000000001
- type: precision_at_100
value: 1.381
- type: precision_at_1000
value: 0.168
- type: precision_at_20
value: 5.303
- type: precision_at_3
value: 19.901
- type: precision_at_5
value: 14.292
- type: recall_at_1
value: 29.509999999999998
- type: recall_at_10
value: 60.169
- type: recall_at_100
value: 84.745
- type: recall_at_1000
value: 95.515
- type: recall_at_20
value: 69.571
- type: recall_at_3
value: 44.751000000000005
- type: recall_at_5
value: 51.675000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval (default)
revision: CQADupstackRetrieval_is_a_combined_dataset
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 42.74816666666667
- type: ndcg_at_10
value: 42.74816666666667
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval (default)
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: main_score
value: 38.574999999999996
- type: map_at_1
value: 26.184
- type: map_at_10
value: 33.796
- type: map_at_100
value: 34.942
- type: map_at_1000
value: 35.027
- type: map_at_20
value: 34.400999999999996
- type: map_at_3
value: 31.247000000000003
- type: map_at_5
value: 32.618
- type: mrr_at_1
value: 29.447852760736197
- type: mrr_at_10
value: 36.7572913623527
- type: mrr_at_100
value: 37.75134813088362
- type: mrr_at_1000
value: 37.80936473092716
- type: mrr_at_20
value: 37.32794705157665
- type: mrr_at_3
value: 34.50920245398773
- type: mrr_at_5
value: 35.70552147239264
- type: nauc_map_at_1000_diff1
value: 41.03705943910151
- type: nauc_map_at_1000_max
value: 31.212034574788223
- type: nauc_map_at_1000_std
value: 5.0860482737872585
- type: nauc_map_at_100_diff1
value: 41.036679724384975
- type: nauc_map_at_100_max
value: 31.22552462503921
- type: nauc_map_at_100_std
value: 5.066442366854383
- type: nauc_map_at_10_diff1
value: 41.10544250629236
- type: nauc_map_at_10_max
value: 31.36660549991616
- type: nauc_map_at_10_std
value: 4.918474556305647
- type: nauc_map_at_1_diff1
value: 45.012348658917084
- type: nauc_map_at_1_max
value: 32.97795481425923
- type: nauc_map_at_1_std
value: 2.378119627191972
- type: nauc_map_at_20_diff1
value: 41.08979133865055
- type: nauc_map_at_20_max
value: 31.215468276820857
- type: nauc_map_at_20_std
value: 5.029740972247495
- type: nauc_map_at_3_diff1
value: 41.628234253590776
- type: nauc_map_at_3_max
value: 32.53336359524941
- type: nauc_map_at_3_std
value: 4.860348221405528
- type: nauc_map_at_5_diff1
value: 41.537709900129116
- type: nauc_map_at_5_max
value: 32.276330681668654
- type: nauc_map_at_5_std
value: 4.846181729651669
- type: nauc_mrr_at_1000_diff1
value: 42.29004874474518
- type: nauc_mrr_at_1000_max
value: 31.307199153225735
- type: nauc_mrr_at_1000_std
value: 4.605131934451417
- type: nauc_mrr_at_100_diff1
value: 42.280047109551546
- type: nauc_mrr_at_100_max
value: 31.289947735731538
- type: nauc_mrr_at_100_std
value: 4.582937582219149
- type: nauc_mrr_at_10_diff1
value: 42.34222112143596
- type: nauc_mrr_at_10_max
value: 31.359940250531142
- type: nauc_mrr_at_10_std
value: 4.453370071132275
- type: nauc_mrr_at_1_diff1
value: 45.95443881951325
- type: nauc_mrr_at_1_max
value: 32.619135528025325
- type: nauc_mrr_at_1_std
value: 2.052662449953393
- type: nauc_mrr_at_20_diff1
value: 42.26941002683479
- type: nauc_mrr_at_20_max
value: 31.187438688521034
- type: nauc_mrr_at_20_std
value: 4.5359475550655715
- type: nauc_mrr_at_3_diff1
value: 43.531839392022135
- type: nauc_mrr_at_3_max
value: 32.21473960551518
- type: nauc_mrr_at_3_std
value: 4.241677481952446
- type: nauc_mrr_at_5_diff1
value: 43.00448483997977
- type: nauc_mrr_at_5_max
value: 31.936515068920237
- type: nauc_mrr_at_5_std
value: 4.254613914320285
- type: nauc_ndcg_at_1000_diff1
value: 39.08960919974518
- type: nauc_ndcg_at_1000_max
value: 30.08930269294802
- type: nauc_ndcg_at_1000_std
value: 7.0902275178016225
- type: nauc_ndcg_at_100_diff1
value: 38.98713815279589
- type: nauc_ndcg_at_100_max
value: 29.82144804645644
- type: nauc_ndcg_at_100_std
value: 6.759601980797914
- type: nauc_ndcg_at_10_diff1
value: 39.418527591834795
- type: nauc_ndcg_at_10_max
value: 30.08055189001222
- type: nauc_ndcg_at_10_std
value: 5.721375611075414
- type: nauc_ndcg_at_1_diff1
value: 45.95443881951325
- type: nauc_ndcg_at_1_max
value: 32.619135528025325
- type: nauc_ndcg_at_1_std
value: 2.052662449953393
- type: nauc_ndcg_at_20_diff1
value: 39.05782103145853
- type: nauc_ndcg_at_20_max
value: 29.49942876513546
- type: nauc_ndcg_at_20_std
value: 6.34657136486055
- type: nauc_ndcg_at_3_diff1
value: 41.125063900984635
- type: nauc_ndcg_at_3_max
value: 32.139095393552424
- type: nauc_ndcg_at_3_std
value: 5.191262454292501
- type: nauc_ndcg_at_5_diff1
value: 40.717371213208544
- type: nauc_ndcg_at_5_max
value: 31.774089542050117
- type: nauc_ndcg_at_5_std
value: 5.223234037768828
- type: nauc_precision_at_1000_diff1
value: 0.006638310316025911
- type: nauc_precision_at_1000_max
value: -9.546883023580094
- type: nauc_precision_at_1000_std
value: -1.475622979214972
- type: nauc_precision_at_100_diff1
value: 11.010276773793507
- type: nauc_precision_at_100_max
value: -0.08180253887926077
- type: nauc_precision_at_100_std
value: 3.287046242664858
- type: nauc_precision_at_10_diff1
value: 27.262245018901698
- type: nauc_precision_at_10_max
value: 16.2877591608577
- type: nauc_precision_at_10_std
value: 5.839311010801853
- type: nauc_precision_at_1_diff1
value: 45.95443881951325
- type: nauc_precision_at_1_max
value: 32.619135528025325
- type: nauc_precision_at_1_std
value: 2.052662449953393
- type: nauc_precision_at_20_diff1
value: 22.408421524281493
- type: nauc_precision_at_20_max
value: 10.077231751543565
- type: nauc_precision_at_20_std
value: 6.236324897139737
- type: nauc_precision_at_3_diff1
value: 37.104186066630184
- type: nauc_precision_at_3_max
value: 28.93970664421486
- type: nauc_precision_at_3_std
value: 6.189175805816679
- type: nauc_precision_at_5_diff1
value: 33.481383755503344
- type: nauc_precision_at_5_max
value: 24.574152076881976
- type: nauc_precision_at_5_std
value: 5.787283838050964
- type: nauc_recall_at_1000_diff1
value: 13.749745478534466
- type: nauc_recall_at_1000_max
value: 27.46595915304242
- type: nauc_recall_at_1000_std
value: 43.337093159412746
- type: nauc_recall_at_100_diff1
value: 25.71608004026722
- type: nauc_recall_at_100_max
value: 23.295361701635084
- type: nauc_recall_at_100_std
value: 17.803464732957156
- type: nauc_recall_at_10_diff1
value: 31.44102657586473
- type: nauc_recall_at_10_max
value: 25.636789857993808
- type: nauc_recall_at_10_std
value: 8.690210156923568
- type: nauc_recall_at_1_diff1
value: 45.012348658917084
- type: nauc_recall_at_1_max
value: 32.97795481425923
- type: nauc_recall_at_1_std
value: 2.378119627191972
- type: nauc_recall_at_20_diff1
value: 29.75929214314049
- type: nauc_recall_at_20_max
value: 22.919735188320487
- type: nauc_recall_at_20_std
value: 11.567442926310765
- type: nauc_recall_at_3_diff1
value: 36.76334334420757
- type: nauc_recall_at_3_max
value: 31.59129150974883
- type: nauc_recall_at_3_std
value: 7.166175857606125
- type: nauc_recall_at_5_diff1
value: 35.13282132180025
- type: nauc_recall_at_5_max
value: 30.350684835131553
- type: nauc_recall_at_5_std
value: 7.142861662933231
- type: ndcg_at_1
value: 29.448
- type: ndcg_at_10
value: 38.574999999999996
- type: ndcg_at_100
value: 44.263999999999996
- type: ndcg_at_1000
value: 46.32
- type: ndcg_at_20
value: 40.628
- type: ndcg_at_3
value: 33.906
- type: ndcg_at_5
value: 36.03
- type: precision_at_1
value: 29.448
- type: precision_at_10
value: 6.166
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.124
- type: precision_at_20
value: 3.627
- type: precision_at_3
value: 14.417
- type: precision_at_5
value: 10.184
- type: recall_at_1
value: 26.184
- type: recall_at_10
value: 50.339
- type: recall_at_100
value: 76.44300000000001
- type: recall_at_1000
value: 91.376
- type: recall_at_20
value: 57.94200000000001
- type: recall_at_3
value: 37.602000000000004
- type: recall_at_5
value: 42.708
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval (default)
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: main_score
value: 30.446
- type: map_at_1
value: 16.888
- type: map_at_10
value: 25.169999999999998
- type: map_at_100
value: 26.432
- type: map_at_1000
value: 26.558
- type: map_at_20
value: 25.884
- type: map_at_3
value: 22.392
- type: map_at_5
value: 23.862
- type: mrr_at_1
value: 20.612525808671712
- type: mrr_at_10
value: 28.94379717934435
- type: mrr_at_100
value: 29.956026940991578
- type: mrr_at_1000
value: 30.030396196620284
- type: mrr_at_20
value: 29.543198182591162
- type: mrr_at_3
value: 26.34778618949303
- type: mrr_at_5
value: 27.725969259004412
- type: nauc_map_at_1000_diff1
value: 37.47997469039452
- type: nauc_map_at_1000_max
value: 21.421931473251863
- type: nauc_map_at_1000_std
value: 1.5690124401135066
- type: nauc_map_at_100_diff1
value: 37.478584664432304
- type: nauc_map_at_100_max
value: 21.422143658021053
- type: nauc_map_at_100_std
value: 1.566528536384738
- type: nauc_map_at_10_diff1
value: 37.63897487891542
- type: nauc_map_at_10_max
value: 21.320030910527255
- type: nauc_map_at_10_std
value: 1.0327570489355187
- type: nauc_map_at_1_diff1
value: 44.71163328349228
- type: nauc_map_at_1_max
value: 22.446113464782112
- type: nauc_map_at_1_std
value: -0.41970785990162957
- type: nauc_map_at_20_diff1
value: 37.60131283021686
- type: nauc_map_at_20_max
value: 21.3691373960991
- type: nauc_map_at_20_std
value: 1.2704576929639178
- type: nauc_map_at_3_diff1
value: 38.569300112130584
- type: nauc_map_at_3_max
value: 21.599281592197645
- type: nauc_map_at_3_std
value: 0.17312117243077374
- type: nauc_map_at_5_diff1
value: 38.003272593074534
- type: nauc_map_at_5_max
value: 21.470587264514265
- type: nauc_map_at_5_std
value: 0.8202467504176192
- type: nauc_mrr_at_1000_diff1
value: 36.40070606249303
- type: nauc_mrr_at_1000_max
value: 20.918159385616235
- type: nauc_mrr_at_1000_std
value: 1.4689044699534843
- type: nauc_mrr_at_100_diff1
value: 36.382723733435185
- type: nauc_mrr_at_100_max
value: 20.914130048646378
- type: nauc_mrr_at_100_std
value: 1.4695708792966349
- type: nauc_mrr_at_10_diff1
value: 36.39783865629839
- type: nauc_mrr_at_10_max
value: 20.807844052080004
- type: nauc_mrr_at_10_std
value: 1.0924977932781788
- type: nauc_mrr_at_1_diff1
value: 42.57454091873592
- type: nauc_mrr_at_1_max
value: 21.672943617832036
- type: nauc_mrr_at_1_std
value: -0.10189138615103883
- type: nauc_mrr_at_20_diff1
value: 36.3838114124106
- type: nauc_mrr_at_20_max
value: 20.87264072376547
- type: nauc_mrr_at_20_std
value: 1.3432553141494952
- type: nauc_mrr_at_3_diff1
value: 37.51571566935928
- type: nauc_mrr_at_3_max
value: 21.19647468708375
- type: nauc_mrr_at_3_std
value: 0.6277750127835567
- type: nauc_mrr_at_5_diff1
value: 36.87464282453542
- type: nauc_mrr_at_5_max
value: 21.0704963624643
- type: nauc_mrr_at_5_std
value: 0.9052912701483784
- type: nauc_ndcg_at_1000_diff1
value: 34.552555694361274
- type: nauc_ndcg_at_1000_max
value: 21.259928579786788
- type: nauc_ndcg_at_1000_std
value: 3.938486886570975
- type: nauc_ndcg_at_100_diff1
value: 34.37518593610454
- type: nauc_ndcg_at_100_max
value: 21.182389588343348
- type: nauc_ndcg_at_100_std
value: 4.3168049004409275
- type: nauc_ndcg_at_10_diff1
value: 35.211341808407504
- type: nauc_ndcg_at_10_max
value: 20.84028975529198
- type: nauc_ndcg_at_10_std
value: 1.8086338693039452
- type: nauc_ndcg_at_1_diff1
value: 42.57454091873592
- type: nauc_ndcg_at_1_max
value: 21.672943617832036
- type: nauc_ndcg_at_1_std
value: -0.10189138615103883
- type: nauc_ndcg_at_20_diff1
value: 35.00363891684754
- type: nauc_ndcg_at_20_max
value: 20.922087179049363
- type: nauc_ndcg_at_20_std
value: 2.660205273507509
- type: nauc_ndcg_at_3_diff1
value: 36.92485381743134
- type: nauc_ndcg_at_3_max
value: 21.25737761098354
- type: nauc_ndcg_at_3_std
value: 0.28798539980447146
- type: nauc_ndcg_at_5_diff1
value: 36.04502896798978
- type: nauc_ndcg_at_5_max
value: 21.148648295149318
- type: nauc_ndcg_at_5_std
value: 1.243003231031824
- type: nauc_precision_at_1000_diff1
value: -0.7759478803048101
- type: nauc_precision_at_1000_max
value: 3.2826437330805502
- type: nauc_precision_at_1000_std
value: 2.7787334076838173
- type: nauc_precision_at_100_diff1
value: 6.959433786637141
- type: nauc_precision_at_100_max
value: 10.104545782506289
- type: nauc_precision_at_100_std
value: 8.917540163713769
- type: nauc_precision_at_10_diff1
value: 22.003522151797437
- type: nauc_precision_at_10_max
value: 16.164192732980553
- type: nauc_precision_at_10_std
value: 3.275914834741683
- type: nauc_precision_at_1_diff1
value: 42.57454091873592
- type: nauc_precision_at_1_max
value: 21.672943617832036
- type: nauc_precision_at_1_std
value: -0.10189138615103883
- type: nauc_precision_at_20_diff1
value: 18.129059379732563
- type: nauc_precision_at_20_max
value: 14.512665907788747
- type: nauc_precision_at_20_std
value: 5.022877954638016
- type: nauc_precision_at_3_diff1
value: 29.98093015706584
- type: nauc_precision_at_3_max
value: 19.728491902142636
- type: nauc_precision_at_3_std
value: 1.4470534167918057
- type: nauc_precision_at_5_diff1
value: 26.50099880522309
- type: nauc_precision_at_5_max
value: 18.138610189869738
- type: nauc_precision_at_5_std
value: 2.551091667929808
- type: nauc_recall_at_1000_diff1
value: 16.96943824149726
- type: nauc_recall_at_1000_max
value: 23.257191427293964
- type: nauc_recall_at_1000_std
value: 24.9502432707826
- type: nauc_recall_at_100_diff1
value: 21.669754477643142
- type: nauc_recall_at_100_max
value: 19.164964731074388
- type: nauc_recall_at_100_std
value: 16.85249185076977
- type: nauc_recall_at_10_diff1
value: 27.551237362397828
- type: nauc_recall_at_10_max
value: 18.28543172320463
- type: nauc_recall_at_10_std
value: 3.5306584526336846
- type: nauc_recall_at_1_diff1
value: 44.71163328349228
- type: nauc_recall_at_1_max
value: 22.446113464782112
- type: nauc_recall_at_1_std
value: -0.41970785990162957
- type: nauc_recall_at_20_diff1
value: 26.271222471772326
- type: nauc_recall_at_20_max
value: 18.12240775027493
- type: nauc_recall_at_20_std
value: 6.607853337331698
- type: nauc_recall_at_3_diff1
value: 32.25185781878737
- type: nauc_recall_at_3_max
value: 20.129371018198135
- type: nauc_recall_at_3_std
value: 0.44779691255305437
- type: nauc_recall_at_5_diff1
value: 29.921019600841547
- type: nauc_recall_at_5_max
value: 19.573769036363174
- type: nauc_recall_at_5_std
value: 2.3711269481227277
- type: ndcg_at_1
value: 20.613
- type: ndcg_at_10
value: 30.446
- type: ndcg_at_100
value: 36.296
- type: ndcg_at_1000
value: 39.062999999999995
- type: ndcg_at_20
value: 32.756
- type: ndcg_at_3
value: 25.413000000000004
- type: ndcg_at_5
value: 27.61
- type: precision_at_1
value: 20.613
- type: precision_at_10
value: 5.785
- type: precision_at_100
value: 1.013
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_20
value: 3.567
- type: precision_at_3
value: 12.216000000000001
- type: precision_at_5
value: 9.030000000000001
- type: recall_at_1
value: 16.888
- type: recall_at_10
value: 42.64
- type: recall_at_100
value: 68.771
- type: recall_at_1000
value: 88.018
- type: recall_at_20
value: 51.121
- type: recall_at_3
value: 28.505000000000003
- type: recall_at_5
value: 34.099000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval (default)
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: main_score
value: 42.283
- type: map_at_1
value: 26.326
- type: map_at_10
value: 36.515
- type: map_at_100
value: 37.832
- type: map_at_1000
value: 37.937
- type: map_at_20
value: 37.269999999999996
- type: map_at_3
value: 33.338
- type: map_at_5
value: 35.169
- type: mrr_at_1
value: 31.25
- type: mrr_at_10
value: 40.64062129827054
- type: mrr_at_100
value: 41.61370050636776
- type: mrr_at_1000
value: 41.67017916648837
- type: mrr_at_20
value: 41.18797257145757
- type: mrr_at_3
value: 37.93532338308455
- type: mrr_at_5
value: 39.55845771144274
- type: nauc_map_at_1000_diff1
value: 50.56683743231891
- type: nauc_map_at_1000_max
value: 39.19969051920628
- type: nauc_map_at_1000_std
value: -0.9554342712910933
- type: nauc_map_at_100_diff1
value: 50.544855645079345
- type: nauc_map_at_100_max
value: 39.191632786859124
- type: nauc_map_at_100_std
value: -0.962267377904227
- type: nauc_map_at_10_diff1
value: 50.74377893641161
- type: nauc_map_at_10_max
value: 39.19344096202928
- type: nauc_map_at_10_std
value: -1.22618961567281
- type: nauc_map_at_1_diff1
value: 57.492804899336434
- type: nauc_map_at_1_max
value: 38.451829796562365
- type: nauc_map_at_1_std
value: -2.2223809957991993
- type: nauc_map_at_20_diff1
value: 50.37379315470132
- type: nauc_map_at_20_max
value: 39.15268041299702
- type: nauc_map_at_20_std
value: -1.1542173582571251
- type: nauc_map_at_3_diff1
value: 52.114315032062265
- type: nauc_map_at_3_max
value: 39.506520142355846
- type: nauc_map_at_3_std
value: -1.136869114727129
- type: nauc_map_at_5_diff1
value: 51.137878020043615
- type: nauc_map_at_5_max
value: 39.41597927774479
- type: nauc_map_at_5_std
value: -1.414373986733375
- type: nauc_mrr_at_1000_diff1
value: 49.28345924937687
- type: nauc_mrr_at_1000_max
value: 39.49024565022835
- type: nauc_mrr_at_1000_std
value: -0.7389778084722739
- type: nauc_mrr_at_100_diff1
value: 49.25964062379304
- type: nauc_mrr_at_100_max
value: 39.49625691927597
- type: nauc_mrr_at_100_std
value: -0.7233812120562104
- type: nauc_mrr_at_10_diff1
value: 49.28005195010669
- type: nauc_mrr_at_10_max
value: 39.502594291827194
- type: nauc_mrr_at_10_std
value: -0.854578965146599
- type: nauc_mrr_at_1_diff1
value: 54.51968972219606
- type: nauc_mrr_at_1_max
value: 38.985521654330725
- type: nauc_mrr_at_1_std
value: -3.17796307755014
- type: nauc_mrr_at_20_diff1
value: 49.140932871712586
- type: nauc_mrr_at_20_max
value: 39.44307540677674
- type: nauc_mrr_at_20_std
value: -0.8396065147276742
- type: nauc_mrr_at_3_diff1
value: 50.04344397525612
- type: nauc_mrr_at_3_max
value: 39.56654196970236
- type: nauc_mrr_at_3_std
value: -1.2528287637913136
- type: nauc_mrr_at_5_diff1
value: 49.489373600446605
- type: nauc_mrr_at_5_max
value: 39.659057230991316
- type: nauc_mrr_at_5_std
value: -0.8720012571429344
- type: nauc_ndcg_at_1000_diff1
value: 48.748836050761405
- type: nauc_ndcg_at_1000_max
value: 39.3457622357591
- type: nauc_ndcg_at_1000_std
value: 1.1002389454170685
- type: nauc_ndcg_at_100_diff1
value: 48.22509167328338
- type: nauc_ndcg_at_100_max
value: 39.3256932518086
- type: nauc_ndcg_at_100_std
value: 1.438492059971218
- type: nauc_ndcg_at_10_diff1
value: 48.523357452437814
- type: nauc_ndcg_at_10_max
value: 39.34471711241775
- type: nauc_ndcg_at_10_std
value: -0.2137972110670513
- type: nauc_ndcg_at_1_diff1
value: 54.51968972219606
- type: nauc_ndcg_at_1_max
value: 38.985521654330725
- type: nauc_ndcg_at_1_std
value: -3.17796307755014
- type: nauc_ndcg_at_20_diff1
value: 47.51869995272205
- type: nauc_ndcg_at_20_max
value: 39.30246710982855
- type: nauc_ndcg_at_20_std
value: 0.1356281374446824
- type: nauc_ndcg_at_3_diff1
value: 50.12867016794126
- type: nauc_ndcg_at_3_max
value: 39.4353732876648
- type: nauc_ndcg_at_3_std
value: -0.9234551014485096
- type: nauc_ndcg_at_5_diff1
value: 49.10482448457108
- type: nauc_ndcg_at_5_max
value: 39.604661308610275
- type: nauc_ndcg_at_5_std
value: -0.7590788407730459
- type: nauc_precision_at_1000_diff1
value: -13.992133335670959
- type: nauc_precision_at_1000_max
value: -7.214390627220537
- type: nauc_precision_at_1000_std
value: 1.639261412748335
- type: nauc_precision_at_100_diff1
value: -0.557128351079009
- type: nauc_precision_at_100_max
value: 7.486849612096312
- type: nauc_precision_at_100_std
value: 7.1810501898680394
- type: nauc_precision_at_10_diff1
value: 21.213914544802844
- type: nauc_precision_at_10_max
value: 25.864858450310546
- type: nauc_precision_at_10_std
value: 0.39125389546740813
- type: nauc_precision_at_1_diff1
value: 54.51968972219606
- type: nauc_precision_at_1_max
value: 38.985521654330725
- type: nauc_precision_at_1_std
value: -3.17796307755014
- type: nauc_precision_at_20_diff1
value: 11.601304405847157
- type: nauc_precision_at_20_max
value: 20.185407711622904
- type: nauc_precision_at_20_std
value: 2.1916426458779488
- type: nauc_precision_at_3_diff1
value: 36.89740060012004
- type: nauc_precision_at_3_max
value: 35.568914734056975
- type: nauc_precision_at_3_std
value: 0.038850738796324405
- type: nauc_precision_at_5_diff1
value: 29.183999992678782
- type: nauc_precision_at_5_max
value: 31.72969928353064
- type: nauc_precision_at_5_std
value: -0.5629836594620032
- type: nauc_recall_at_1000_diff1
value: 37.261390414310384
- type: nauc_recall_at_1000_max
value: 34.923735354550324
- type: nauc_recall_at_1000_std
value: 45.97695232902403
- type: nauc_recall_at_100_diff1
value: 35.67925434563207
- type: nauc_recall_at_100_max
value: 35.26178579038922
- type: nauc_recall_at_100_std
value: 17.131274487036695
- type: nauc_recall_at_10_diff1
value: 40.90067655059736
- type: nauc_recall_at_10_max
value: 36.79952710248241
- type: nauc_recall_at_10_std
value: 2.716241775569224
- type: nauc_recall_at_1_diff1
value: 57.492804899336434
- type: nauc_recall_at_1_max
value: 38.451829796562365
- type: nauc_recall_at_1_std
value: -2.2223809957991993
- type: nauc_recall_at_20_diff1
value: 36.08583461458776
- type: nauc_recall_at_20_max
value: 36.62990105037789
- type: nauc_recall_at_20_std
value: 4.337305167037863
- type: nauc_recall_at_3_diff1
value: 46.41673012651659
- type: nauc_recall_at_3_max
value: 38.842854844453505
- type: nauc_recall_at_3_std
value: 0.8460605171745147
- type: nauc_recall_at_5_diff1
value: 43.29735456270288
- type: nauc_recall_at_5_max
value: 38.51958912080913
- type: nauc_recall_at_5_std
value: 1.1156101097663538
- type: ndcg_at_1
value: 31.25
- type: ndcg_at_10
value: 42.283
- type: ndcg_at_100
value: 48.067
- type: ndcg_at_1000
value: 50.246
- type: ndcg_at_20
value: 44.644
- type: ndcg_at_3
value: 36.858000000000004
- type: ndcg_at_5
value: 39.516
- type: precision_at_1
value: 31.25
- type: precision_at_10
value: 7.369000000000001
- type: precision_at_100
value: 1.137
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_20
value: 4.328
- type: precision_at_3
value: 17.071
- type: precision_at_5
value: 12.257
- type: recall_at_1
value: 26.326
- type: recall_at_10
value: 55.689
- type: recall_at_100
value: 80.60000000000001
- type: recall_at_1000
value: 95.33500000000001
- type: recall_at_20
value: 64.229
- type: recall_at_3
value: 40.836
- type: recall_at_5
value: 47.577000000000005
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval (default)
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: main_score
value: 42.841
- type: map_at_1
value: 26.180999999999997
- type: map_at_10
value: 36.370000000000005
- type: map_at_100
value: 38.143
- type: map_at_1000
value: 38.344
- type: map_at_20
value: 37.333
- type: map_at_3
value: 33.061
- type: map_at_5
value: 34.776
- type: mrr_at_1
value: 31.422924901185773
- type: mrr_at_10
value: 40.798513081121804
- type: mrr_at_100
value: 41.768341726195324
- type: mrr_at_1000
value: 41.814384046688836
- type: mrr_at_20
value: 41.39358410960642
- type: mrr_at_3
value: 37.91172595520424
- type: mrr_at_5
value: 39.39393939393942
- type: nauc_map_at_1000_diff1
value: 54.03266808787207
- type: nauc_map_at_1000_max
value: 27.336058137590257
- type: nauc_map_at_1000_std
value: 7.929483408524749
- type: nauc_map_at_100_diff1
value: 54.07605617892997
- type: nauc_map_at_100_max
value: 27.420933517205988
- type: nauc_map_at_100_std
value: 7.654364598186232
- type: nauc_map_at_10_diff1
value: 54.49638487392654
- type: nauc_map_at_10_max
value: 26.95803941812555
- type: nauc_map_at_10_std
value: 6.0656481626678955
- type: nauc_map_at_1_diff1
value: 60.62163331093275
- type: nauc_map_at_1_max
value: 30.354161137182604
- type: nauc_map_at_1_std
value: 3.283563596176243
- type: nauc_map_at_20_diff1
value: 54.2171414323596
- type: nauc_map_at_20_max
value: 27.284531333468713
- type: nauc_map_at_20_std
value: 6.98275284446578
- type: nauc_map_at_3_diff1
value: 55.48999072237882
- type: nauc_map_at_3_max
value: 27.87434380647368
- type: nauc_map_at_3_std
value: 5.868275382905556
- type: nauc_map_at_5_diff1
value: 54.84718663927504
- type: nauc_map_at_5_max
value: 26.76192258450303
- type: nauc_map_at_5_std
value: 4.739255945404961
- type: nauc_mrr_at_1000_diff1
value: 53.90866989000705
- type: nauc_mrr_at_1000_max
value: 28.600059918390247
- type: nauc_mrr_at_1000_std
value: 9.096507718338657
- type: nauc_mrr_at_100_diff1
value: 53.902988075226396
- type: nauc_mrr_at_100_max
value: 28.599830953942174
- type: nauc_mrr_at_100_std
value: 9.106284426792636
- type: nauc_mrr_at_10_diff1
value: 53.80346272826417
- type: nauc_mrr_at_10_max
value: 28.281963295521706
- type: nauc_mrr_at_10_std
value: 8.759210459852863
- type: nauc_mrr_at_1_diff1
value: 60.080144505628354
- type: nauc_mrr_at_1_max
value: 33.74016395865226
- type: nauc_mrr_at_1_std
value: 7.6714142708021305
- type: nauc_mrr_at_20_diff1
value: 53.760177497884406
- type: nauc_mrr_at_20_max
value: 28.463215939799813
- type: nauc_mrr_at_20_std
value: 9.068314971833093
- type: nauc_mrr_at_3_diff1
value: 54.41179314982579
- type: nauc_mrr_at_3_max
value: 29.01231966941189
- type: nauc_mrr_at_3_std
value: 9.383760609453352
- type: nauc_mrr_at_5_diff1
value: 54.261154767714515
- type: nauc_mrr_at_5_max
value: 28.187796326709314
- type: nauc_mrr_at_5_std
value: 8.324984381963386
- type: nauc_ndcg_at_1000_diff1
value: 52.16756830119805
- type: nauc_ndcg_at_1000_max
value: 27.47333072396369
- type: nauc_ndcg_at_1000_std
value: 10.433977027658207
- type: nauc_ndcg_at_100_diff1
value: 51.67893475997602
- type: nauc_ndcg_at_100_max
value: 27.364432612842776
- type: nauc_ndcg_at_100_std
value: 10.418878470827911
- type: nauc_ndcg_at_10_diff1
value: 51.455066768364546
- type: nauc_ndcg_at_10_max
value: 24.86204769904609
- type: nauc_ndcg_at_10_std
value: 7.975685972633213
- type: nauc_ndcg_at_1_diff1
value: 60.080144505628354
- type: nauc_ndcg_at_1_max
value: 33.74016395865226
- type: nauc_ndcg_at_1_std
value: 7.6714142708021305
- type: nauc_ndcg_at_20_diff1
value: 51.135229230296154
- type: nauc_ndcg_at_20_max
value: 25.718284057364894
- type: nauc_ndcg_at_20_std
value: 9.289363271312794
- type: nauc_ndcg_at_3_diff1
value: 52.70782059846899
- type: nauc_ndcg_at_3_max
value: 27.470104306225863
- type: nauc_ndcg_at_3_std
value: 8.98582220953654
- type: nauc_ndcg_at_5_diff1
value: 52.13622381467935
- type: nauc_ndcg_at_5_max
value: 25.012072634464516
- type: nauc_ndcg_at_5_std
value: 6.400559275913626
- type: nauc_precision_at_1000_diff1
value: -8.068455064670975
- type: nauc_precision_at_1000_max
value: -10.387599717496192
- type: nauc_precision_at_1000_std
value: 29.28771717137362
- type: nauc_precision_at_100_diff1
value: -4.542486688876828
- type: nauc_precision_at_100_max
value: 2.2213727010948805
- type: nauc_precision_at_100_std
value: 28.27046916836265
- type: nauc_precision_at_10_diff1
value: 19.415176505821286
- type: nauc_precision_at_10_max
value: 13.444503991503346
- type: nauc_precision_at_10_std
value: 16.810075843089322
- type: nauc_precision_at_1_diff1
value: 60.080144505628354
- type: nauc_precision_at_1_max
value: 33.74016395865226
- type: nauc_precision_at_1_std
value: 7.6714142708021305
- type: nauc_precision_at_20_diff1
value: 7.891942509311732
- type: nauc_precision_at_20_max
value: 9.684197810455526
- type: nauc_precision_at_20_std
value: 22.88953757757932
- type: nauc_precision_at_3_diff1
value: 37.07359628126754
- type: nauc_precision_at_3_max
value: 23.182518856006016
- type: nauc_precision_at_3_std
value: 15.043709459451618
- type: nauc_precision_at_5_diff1
value: 30.603525439923317
- type: nauc_precision_at_5_max
value: 17.887460487183446
- type: nauc_precision_at_5_std
value: 10.354003595459048
- type: nauc_recall_at_1000_diff1
value: 37.24937924148794
- type: nauc_recall_at_1000_max
value: 27.116312668851744
- type: nauc_recall_at_1000_std
value: 53.85172781866263
- type: nauc_recall_at_100_diff1
value: 36.95341517350607
- type: nauc_recall_at_100_max
value: 26.388323872148362
- type: nauc_recall_at_100_std
value: 25.552378739251036
- type: nauc_recall_at_10_diff1
value: 40.71842158421213
- type: nauc_recall_at_10_max
value: 16.378208729794906
- type: nauc_recall_at_10_std
value: 7.038163226525162
- type: nauc_recall_at_1_diff1
value: 60.62163331093275
- type: nauc_recall_at_1_max
value: 30.354161137182604
- type: nauc_recall_at_1_std
value: 3.283563596176243
- type: nauc_recall_at_20_diff1
value: 37.67229343743934
- type: nauc_recall_at_20_max
value: 19.09861858622759
- type: nauc_recall_at_20_std
value: 12.498129510164299
- type: nauc_recall_at_3_diff1
value: 47.38926382155088
- type: nauc_recall_at_3_max
value: 21.835926284104218
- type: nauc_recall_at_3_std
value: 6.956536082796651
- type: nauc_recall_at_5_diff1
value: 44.52691027171522
- type: nauc_recall_at_5_max
value: 16.60678467044489
- type: nauc_recall_at_5_std
value: 2.751824192702687
- type: ndcg_at_1
value: 31.423000000000002
- type: ndcg_at_10
value: 42.841
- type: ndcg_at_100
value: 49.003
- type: ndcg_at_1000
value: 51.117999999999995
- type: ndcg_at_20
value: 45.273
- type: ndcg_at_3
value: 37.469
- type: ndcg_at_5
value: 39.841
- type: precision_at_1
value: 31.423000000000002
- type: precision_at_10
value: 8.419
- type: precision_at_100
value: 1.638
- type: precision_at_1000
value: 0.243
- type: precision_at_20
value: 5.375
- type: precision_at_3
value: 17.852
- type: precision_at_5
value: 12.964
- type: recall_at_1
value: 26.180999999999997
- type: recall_at_10
value: 55.564
- type: recall_at_100
value: 83.22500000000001
- type: recall_at_1000
value: 96.124
- type: recall_at_20
value: 64.68199999999999
- type: recall_at_3
value: 40.28
- type: recall_at_5
value: 46.535
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval (default)
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: main_score
value: 35.555
- type: map_at_1
value: 22.286
- type: map_at_10
value: 30.61
- type: map_at_100
value: 31.619999999999997
- type: map_at_1000
value: 31.724999999999998
- type: map_at_20
value: 31.092
- type: map_at_3
value: 27.962999999999997
- type: map_at_5
value: 29.383
- type: mrr_at_1
value: 24.399260628465804
- type: mrr_at_10
value: 32.696065487193025
- type: mrr_at_100
value: 33.58070054530842
- type: mrr_at_1000
value: 33.654777325405774
- type: mrr_at_20
value: 33.11457792065027
- type: mrr_at_3
value: 30.31423290203327
- type: mrr_at_5
value: 31.552680221811464
- type: nauc_map_at_1000_diff1
value: 39.79013314997585
- type: nauc_map_at_1000_max
value: 22.27888746840121
- type: nauc_map_at_1000_std
value: 4.539318966935818
- type: nauc_map_at_100_diff1
value: 39.776949099430524
- type: nauc_map_at_100_max
value: 22.23994720281935
- type: nauc_map_at_100_std
value: 4.532364090321059
- type: nauc_map_at_10_diff1
value: 40.214668109177545
- type: nauc_map_at_10_max
value: 22.310555615555764
- type: nauc_map_at_10_std
value: 3.7456514343350205
- type: nauc_map_at_1_diff1
value: 46.18393596000586
- type: nauc_map_at_1_max
value: 24.547598678024556
- type: nauc_map_at_1_std
value: 6.0574958769530465
- type: nauc_map_at_20_diff1
value: 39.95037327455914
- type: nauc_map_at_20_max
value: 22.369335761495186
- type: nauc_map_at_20_std
value: 4.248871676377463
- type: nauc_map_at_3_diff1
value: 40.39031064698702
- type: nauc_map_at_3_max
value: 22.036440129422672
- type: nauc_map_at_3_std
value: 2.4849784793648846
- type: nauc_map_at_5_diff1
value: 40.55531780202422
- type: nauc_map_at_5_max
value: 22.34099868910038
- type: nauc_map_at_5_std
value: 3.989437759311683
- type: nauc_mrr_at_1000_diff1
value: 38.22864890501086
- type: nauc_mrr_at_1000_max
value: 22.196145770688915
- type: nauc_mrr_at_1000_std
value: 6.366087881052758
- type: nauc_mrr_at_100_diff1
value: 38.19684329027937
- type: nauc_mrr_at_100_max
value: 22.17259263583887
- type: nauc_mrr_at_100_std
value: 6.3579191826046895
- type: nauc_mrr_at_10_diff1
value: 38.50520505165495
- type: nauc_mrr_at_10_max
value: 22.14145550999763
- type: nauc_mrr_at_10_std
value: 5.87670477461074
- type: nauc_mrr_at_1_diff1
value: 43.580238226066754
- type: nauc_mrr_at_1_max
value: 25.37631028483947
- type: nauc_mrr_at_1_std
value: 8.27700367711168
- type: nauc_mrr_at_20_diff1
value: 38.301149084550985
- type: nauc_mrr_at_20_max
value: 22.237002751026584
- type: nauc_mrr_at_20_std
value: 6.157632931853065
- type: nauc_mrr_at_3_diff1
value: 38.40064989443
- type: nauc_mrr_at_3_max
value: 22.300592015957253
- type: nauc_mrr_at_3_std
value: 5.111142119521902
- type: nauc_mrr_at_5_diff1
value: 38.74181914377854
- type: nauc_mrr_at_5_max
value: 22.25441111952184
- type: nauc_mrr_at_5_std
value: 6.22876437673998
- type: nauc_ndcg_at_1000_diff1
value: 36.69736142976795
- type: nauc_ndcg_at_1000_max
value: 21.867116284783787
- type: nauc_ndcg_at_1000_std
value: 7.265926771096148
- type: nauc_ndcg_at_100_diff1
value: 36.09322471126019
- type: nauc_ndcg_at_100_max
value: 21.11550289992875
- type: nauc_ndcg_at_100_std
value: 7.040857596769399
- type: nauc_ndcg_at_10_diff1
value: 38.066185877266406
- type: nauc_ndcg_at_10_max
value: 21.406313151333396
- type: nauc_ndcg_at_10_std
value: 3.714388060329858
- type: nauc_ndcg_at_1_diff1
value: 43.580238226066754
- type: nauc_ndcg_at_1_max
value: 25.37631028483947
- type: nauc_ndcg_at_1_std
value: 8.27700367711168
- type: nauc_ndcg_at_20_diff1
value: 37.176737325196655
- type: nauc_ndcg_at_20_max
value: 21.605872861888944
- type: nauc_ndcg_at_20_std
value: 5.139273672061484
- type: nauc_ndcg_at_3_diff1
value: 37.99865829973418
- type: nauc_ndcg_at_3_max
value: 21.628352451265933
- type: nauc_ndcg_at_3_std
value: 2.5403484884659906
- type: nauc_ndcg_at_5_diff1
value: 38.68827688198417
- type: nauc_ndcg_at_5_max
value: 21.766119634697375
- type: nauc_ndcg_at_5_std
value: 4.663477639905768
- type: nauc_precision_at_1000_diff1
value: -24.32404164272638
- type: nauc_precision_at_1000_max
value: -0.1920006879032294
- type: nauc_precision_at_1000_std
value: 3.8459453302163835
- type: nauc_precision_at_100_diff1
value: 0.0961190193116701
- type: nauc_precision_at_100_max
value: 10.432470527841613
- type: nauc_precision_at_100_std
value: 19.51298317615412
- type: nauc_precision_at_10_diff1
value: 24.865309916077123
- type: nauc_precision_at_10_max
value: 19.106193444839885
- type: nauc_precision_at_10_std
value: 6.1319125503229985
- type: nauc_precision_at_1_diff1
value: 43.580238226066754
- type: nauc_precision_at_1_max
value: 25.37631028483947
- type: nauc_precision_at_1_std
value: 8.27700367711168
- type: nauc_precision_at_20_diff1
value: 17.152528821707108
- type: nauc_precision_at_20_max
value: 18.550074587326083
- type: nauc_precision_at_20_std
value: 12.414087853840773
- type: nauc_precision_at_3_diff1
value: 29.793753328467677
- type: nauc_precision_at_3_max
value: 18.856628740486958
- type: nauc_precision_at_3_std
value: 1.7490040552720874
- type: nauc_precision_at_5_diff1
value: 27.95189102052665
- type: nauc_precision_at_5_max
value: 20.089236844488443
- type: nauc_precision_at_5_std
value: 8.272526795799227
- type: nauc_recall_at_1000_diff1
value: 13.869138770344335
- type: nauc_recall_at_1000_max
value: 25.76264057259768
- type: nauc_recall_at_1000_std
value: 42.620945012763244
- type: nauc_recall_at_100_diff1
value: 18.954723626828734
- type: nauc_recall_at_100_max
value: 15.591123917397793
- type: nauc_recall_at_100_std
value: 18.872204747720037
- type: nauc_recall_at_10_diff1
value: 32.50173111514971
- type: nauc_recall_at_10_max
value: 18.335922588632688
- type: nauc_recall_at_10_std
value: 1.6231924423632595
- type: nauc_recall_at_1_diff1
value: 46.18393596000586
- type: nauc_recall_at_1_max
value: 24.547598678024556
- type: nauc_recall_at_1_std
value: 6.0574958769530465
- type: nauc_recall_at_20_diff1
value: 29.101695015438395
- type: nauc_recall_at_20_max
value: 18.63912055487345
- type: nauc_recall_at_20_std
value: 6.064314698688468
- type: nauc_recall_at_3_diff1
value: 33.83121888715772
- type: nauc_recall_at_3_max
value: 19.258419406401263
- type: nauc_recall_at_3_std
value: -0.9541791506796478
- type: nauc_recall_at_5_diff1
value: 34.75197279898117
- type: nauc_recall_at_5_max
value: 19.704512261533242
- type: nauc_recall_at_5_std
value: 4.482729218598009
- type: ndcg_at_1
value: 24.399
- type: ndcg_at_10
value: 35.555
- type: ndcg_at_100
value: 40.723
- type: ndcg_at_1000
value: 43.155
- type: ndcg_at_20
value: 37.141999999999996
- type: ndcg_at_3
value: 30.45
- type: ndcg_at_5
value: 32.749
- type: precision_at_1
value: 24.399
- type: precision_at_10
value: 5.601
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 3.216
- type: precision_at_3
value: 12.939
- type: precision_at_5
value: 9.168
- type: recall_at_1
value: 22.286
- type: recall_at_10
value: 48.925000000000004
- type: recall_at_100
value: 72.791
- type: recall_at_1000
value: 90.69
- type: recall_at_20
value: 54.649
- type: recall_at_3
value: 35.022
- type: recall_at_5
value: 40.579
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER (default)
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 41.177
- type: map_at_1
value: 17.376
- type: map_at_10
value: 30.705
- type: map_at_100
value: 33.145
- type: map_at_1000
value: 33.324
- type: map_at_20
value: 32.129999999999995
- type: map_at_3
value: 25.352000000000004
- type: map_at_5
value: 28.23
- type: mrr_at_1
value: 41.56351791530945
- type: mrr_at_10
value: 53.54648156765427
- type: mrr_at_100
value: 54.16634559859816
- type: mrr_at_1000
value: 54.18645395732765
- type: mrr_at_20
value: 53.9378827028433
- type: mrr_at_3
value: 50.228013029316024
- type: mrr_at_5
value: 52.172638436482174
- type: nauc_map_at_1000_diff1
value: 6.876082348162679
- type: nauc_map_at_1000_max
value: 21.009396900572714
- type: nauc_map_at_1000_std
value: 14.066430895753937
- type: nauc_map_at_100_diff1
value: 6.850348439065698
- type: nauc_map_at_100_max
value: 21.00364553924676
- type: nauc_map_at_100_std
value: 14.059870647686076
- type: nauc_map_at_10_diff1
value: 6.509819157572225
- type: nauc_map_at_10_max
value: 20.8065690550504
- type: nauc_map_at_10_std
value: 12.562768638969086
- type: nauc_map_at_1_diff1
value: 19.113985692043915
- type: nauc_map_at_1_max
value: 27.403489479561337
- type: nauc_map_at_1_std
value: 8.997280354530837
- type: nauc_map_at_20_diff1
value: 6.689209935271891
- type: nauc_map_at_20_max
value: 20.829284453048967
- type: nauc_map_at_20_std
value: 13.537098219731128
- type: nauc_map_at_3_diff1
value: 8.354071849010772
- type: nauc_map_at_3_max
value: 21.39794707841315
- type: nauc_map_at_3_std
value: 10.16825293444317
- type: nauc_map_at_5_diff1
value: 6.353792564160103
- type: nauc_map_at_5_max
value: 20.654610018600735
- type: nauc_map_at_5_std
value: 11.51720666348388
- type: nauc_mrr_at_1000_diff1
value: 18.719626503061086
- type: nauc_mrr_at_1000_max
value: 25.382297708144915
- type: nauc_mrr_at_1000_std
value: 20.795619918235513
- type: nauc_mrr_at_100_diff1
value: 18.707844253612848
- type: nauc_mrr_at_100_max
value: 25.37308894691589
- type: nauc_mrr_at_100_std
value: 20.792369663110737
- type: nauc_mrr_at_10_diff1
value: 18.599552029091104
- type: nauc_mrr_at_10_max
value: 25.27052175696751
- type: nauc_mrr_at_10_std
value: 20.780213374556904
- type: nauc_mrr_at_1_diff1
value: 24.986463675582733
- type: nauc_mrr_at_1_max
value: 28.633615906622467
- type: nauc_mrr_at_1_std
value: 18.935003813583457
- type: nauc_mrr_at_20_diff1
value: 18.58009831602654
- type: nauc_mrr_at_20_max
value: 25.309342060502825
- type: nauc_mrr_at_20_std
value: 20.811933813239104
- type: nauc_mrr_at_3_diff1
value: 19.03652325617102
- type: nauc_mrr_at_3_max
value: 25.590424434633995
- type: nauc_mrr_at_3_std
value: 20.672321139371263
- type: nauc_mrr_at_5_diff1
value: 18.62484399593036
- type: nauc_mrr_at_5_max
value: 25.69914791020157
- type: nauc_mrr_at_5_std
value: 20.85655370414309
- type: nauc_ndcg_at_1000_diff1
value: 8.54695673292356
- type: nauc_ndcg_at_1000_max
value: 20.965191922952513
- type: nauc_ndcg_at_1000_std
value: 18.638066252011978
- type: nauc_ndcg_at_100_diff1
value: 8.031774449316728
- type: nauc_ndcg_at_100_max
value: 21.075278652222494
- type: nauc_ndcg_at_100_std
value: 19.202919369605972
- type: nauc_ndcg_at_10_diff1
value: 6.857083069946808
- type: nauc_ndcg_at_10_max
value: 20.253829678610604
- type: nauc_ndcg_at_10_std
value: 15.456896398668595
- type: nauc_ndcg_at_1_diff1
value: 24.986463675582733
- type: nauc_ndcg_at_1_max
value: 28.633615906622467
- type: nauc_ndcg_at_1_std
value: 18.935003813583457
- type: nauc_ndcg_at_20_diff1
value: 7.310618350530157
- type: nauc_ndcg_at_20_max
value: 20.48058063251671
- type: nauc_ndcg_at_20_std
value: 17.35126095861103
- type: nauc_ndcg_at_3_diff1
value: 10.284697710828992
- type: nauc_ndcg_at_3_max
value: 21.404564460904535
- type: nauc_ndcg_at_3_std
value: 13.811528596529799
- type: nauc_ndcg_at_5_diff1
value: 6.932072809009071
- type: nauc_ndcg_at_5_max
value: 20.648949990060657
- type: nauc_ndcg_at_5_std
value: 14.368751919376846
- type: nauc_precision_at_1000_diff1
value: -1.4140589422343832
- type: nauc_precision_at_1000_max
value: -6.6374826556613264
- type: nauc_precision_at_1000_std
value: 11.116149167404775
- type: nauc_precision_at_100_diff1
value: -0.5816105386152639
- type: nauc_precision_at_100_max
value: 1.2367532155168361
- type: nauc_precision_at_100_std
value: 20.01762008226351
- type: nauc_precision_at_10_diff1
value: -1.8634971794747164
- type: nauc_precision_at_10_max
value: 6.8960226644416185
- type: nauc_precision_at_10_std
value: 17.20121919885631
- type: nauc_precision_at_1_diff1
value: 24.986463675582733
- type: nauc_precision_at_1_max
value: 28.633615906622467
- type: nauc_precision_at_1_std
value: 18.935003813583457
- type: nauc_precision_at_20_diff1
value: -1.4459597575880887
- type: nauc_precision_at_20_max
value: 5.307806932575533
- type: nauc_precision_at_20_std
value: 19.451800377499655
- type: nauc_precision_at_3_diff1
value: 4.236106307523834
- type: nauc_precision_at_3_max
value: 14.046883704229765
- type: nauc_precision_at_3_std
value: 17.800580068504328
- type: nauc_precision_at_5_diff1
value: -1.3650327582096584
- type: nauc_precision_at_5_max
value: 10.207588037756324
- type: nauc_precision_at_5_std
value: 17.342725667697678
- type: nauc_recall_at_1000_diff1
value: -0.3456913485138751
- type: nauc_recall_at_1000_max
value: 9.035999568091443
- type: nauc_recall_at_1000_std
value: 24.89435133186522
- type: nauc_recall_at_100_diff1
value: -0.4515116177152527
- type: nauc_recall_at_100_max
value: 13.308695449140274
- type: nauc_recall_at_100_std
value: 24.08184104676165
- type: nauc_recall_at_10_diff1
value: -1.7208221232376235
- type: nauc_recall_at_10_max
value: 13.184289213175079
- type: nauc_recall_at_10_std
value: 12.654581726678604
- type: nauc_recall_at_1_diff1
value: 19.113985692043915
- type: nauc_recall_at_1_max
value: 27.403489479561337
- type: nauc_recall_at_1_std
value: 8.997280354530837
- type: nauc_recall_at_20_diff1
value: -0.7023429202973307
- type: nauc_recall_at_20_max
value: 12.830137977471596
- type: nauc_recall_at_20_std
value: 16.37670340447336
- type: nauc_recall_at_3_diff1
value: 2.8972253611264143
- type: nauc_recall_at_3_max
value: 16.779165952414292
- type: nauc_recall_at_3_std
value: 9.121837904207856
- type: nauc_recall_at_5_diff1
value: -1.2895779049988085
- type: nauc_recall_at_5_max
value: 14.974218341119162
- type: nauc_recall_at_5_std
value: 11.278321881932376
- type: ndcg_at_1
value: 41.564
- type: ndcg_at_10
value: 41.177
- type: ndcg_at_100
value: 49.036
- type: ndcg_at_1000
value: 51.864
- type: ndcg_at_20
value: 44.535000000000004
- type: ndcg_at_3
value: 34.183
- type: ndcg_at_5
value: 36.636
- type: precision_at_1
value: 41.564
- type: precision_at_10
value: 12.886000000000001
- type: precision_at_100
value: 2.145
- type: precision_at_1000
value: 0.268
- type: precision_at_20
value: 7.922
- type: precision_at_3
value: 25.668000000000003
- type: precision_at_5
value: 19.713
- type: recall_at_1
value: 17.376
- type: recall_at_10
value: 48.116
- type: recall_at_100
value: 73.983
- type: recall_at_1000
value: 89.557
- type: recall_at_20
value: 57.376000000000005
- type: recall_at_3
value: 30.624000000000002
- type: recall_at_5
value: 38.072
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 48.979
- type: map_at_1
value: 9.858
- type: map_at_10
value: 22.772000000000002
- type: map_at_100
value: 32.067
- type: map_at_1000
value: 33.789
- type: map_at_20
value: 26.6
- type: map_at_3
value: 15.817999999999998
- type: map_at_5
value: 18.88
- type: mrr_at_1
value: 75.75
- type: mrr_at_10
value: 81.4422619047619
- type: mrr_at_100
value: 81.66908663880871
- type: mrr_at_1000
value: 81.67603961133557
- type: mrr_at_20
value: 81.58319354256854
- type: mrr_at_3
value: 79.91666666666667
- type: mrr_at_5
value: 81.05416666666666
- type: nauc_map_at_1000_diff1
value: 23.991232076620477
- type: nauc_map_at_1000_max
value: 35.287316450717924
- type: nauc_map_at_1000_std
value: 19.352326951207623
- type: nauc_map_at_100_diff1
value: 24.340309156988095
- type: nauc_map_at_100_max
value: 35.68099330475215
- type: nauc_map_at_100_std
value: 17.175838739585252
- type: nauc_map_at_10_diff1
value: 21.561923328790826
- type: nauc_map_at_10_max
value: 30.221314679896555
- type: nauc_map_at_10_std
value: -5.171508970583829
- type: nauc_map_at_1_diff1
value: 27.8980502688949
- type: nauc_map_at_1_max
value: 18.011347877902235
- type: nauc_map_at_1_std
value: -21.8828654183319
- type: nauc_map_at_20_diff1
value: 21.97593079179927
- type: nauc_map_at_20_max
value: 33.1027436090741
- type: nauc_map_at_20_std
value: 3.8376582930602887
- type: nauc_map_at_3_diff1
value: 23.696811666749362
- type: nauc_map_at_3_max
value: 25.004984475522406
- type: nauc_map_at_3_std
value: -16.036281146384134
- type: nauc_map_at_5_diff1
value: 22.303297302695672
- type: nauc_map_at_5_max
value: 25.908488411484537
- type: nauc_map_at_5_std
value: -12.727467597748399
- type: nauc_mrr_at_1000_diff1
value: 50.807506669263105
- type: nauc_mrr_at_1000_max
value: 59.521138888646895
- type: nauc_mrr_at_1000_std
value: 39.72453658171713
- type: nauc_mrr_at_100_diff1
value: 50.809052816882414
- type: nauc_mrr_at_100_max
value: 59.52443036190528
- type: nauc_mrr_at_100_std
value: 39.71360790190832
- type: nauc_mrr_at_10_diff1
value: 50.71551464513347
- type: nauc_mrr_at_10_max
value: 59.46887584854914
- type: nauc_mrr_at_10_std
value: 39.720073174909146
- type: nauc_mrr_at_1_diff1
value: 51.23431960913661
- type: nauc_mrr_at_1_max
value: 59.02477220193181
- type: nauc_mrr_at_1_std
value: 37.613094567706604
- type: nauc_mrr_at_20_diff1
value: 50.68567900468689
- type: nauc_mrr_at_20_max
value: 59.398702247575116
- type: nauc_mrr_at_20_std
value: 39.84349342123071
- type: nauc_mrr_at_3_diff1
value: 50.84159182980731
- type: nauc_mrr_at_3_max
value: 59.586303879639814
- type: nauc_mrr_at_3_std
value: 39.115703986532054
- type: nauc_mrr_at_5_diff1
value: 50.9427075304326
- type: nauc_mrr_at_5_max
value: 59.9197314639652
- type: nauc_mrr_at_5_std
value: 40.03939021575725
- type: nauc_ndcg_at_1000_diff1
value: 35.299374382112134
- type: nauc_ndcg_at_1000_max
value: 42.17483524995039
- type: nauc_ndcg_at_1000_std
value: 36.65033986688723
- type: nauc_ndcg_at_100_diff1
value: 34.44823939199226
- type: nauc_ndcg_at_100_max
value: 41.7528959441004
- type: nauc_ndcg_at_100_std
value: 28.72365119802961
- type: nauc_ndcg_at_10_diff1
value: 29.32293547048091
- type: nauc_ndcg_at_10_max
value: 40.101679400646006
- type: nauc_ndcg_at_10_std
value: 26.5721071370353
- type: nauc_ndcg_at_1_diff1
value: 48.319456575299284
- type: nauc_ndcg_at_1_max
value: 48.27377641677222
- type: nauc_ndcg_at_1_std
value: 29.76971701564757
- type: nauc_ndcg_at_20_diff1
value: 30.927032015266835
- type: nauc_ndcg_at_20_max
value: 40.52043580178855
- type: nauc_ndcg_at_20_std
value: 25.197926348678955
- type: nauc_ndcg_at_3_diff1
value: 33.082418428993115
- type: nauc_ndcg_at_3_max
value: 40.62252050374572
- type: nauc_ndcg_at_3_std
value: 28.113380979394726
- type: nauc_ndcg_at_5_diff1
value: 29.635117682340617
- type: nauc_ndcg_at_5_max
value: 38.11353464984394
- type: nauc_ndcg_at_5_std
value: 28.33324261545152
- type: nauc_precision_at_1000_diff1
value: -18.548687962963978
- type: nauc_precision_at_1000_max
value: -14.491062706051878
- type: nauc_precision_at_1000_std
value: 10.681709294585238
- type: nauc_precision_at_100_diff1
value: -1.688856131371092
- type: nauc_precision_at_100_max
value: 5.481319501702683
- type: nauc_precision_at_100_std
value: 39.979879645237446
- type: nauc_precision_at_10_diff1
value: 0.576840213887176
- type: nauc_precision_at_10_max
value: 18.614962845466955
- type: nauc_precision_at_10_std
value: 42.024684223351464
- type: nauc_precision_at_1_diff1
value: 51.23431960913661
- type: nauc_precision_at_1_max
value: 59.02477220193181
- type: nauc_precision_at_1_std
value: 37.613094567706604
- type: nauc_precision_at_20_diff1
value: 1.3707715784045262
- type: nauc_precision_at_20_max
value: 14.922028634512083
- type: nauc_precision_at_20_std
value: 44.76530134675204
- type: nauc_precision_at_3_diff1
value: 13.094243395849992
- type: nauc_precision_at_3_max
value: 29.850584449565037
- type: nauc_precision_at_3_std
value: 35.77371986318991
- type: nauc_precision_at_5_diff1
value: 6.798339179999441
- type: nauc_precision_at_5_max
value: 23.08541604839939
- type: nauc_precision_at_5_std
value: 40.28922731098164
- type: nauc_recall_at_1000_diff1
value: 27.24738341174725
- type: nauc_recall_at_1000_max
value: 31.09981332493123
- type: nauc_recall_at_1000_std
value: 41.96422474881881
- type: nauc_recall_at_100_diff1
value: 24.922315595458294
- type: nauc_recall_at_100_max
value: 32.53690673184911
- type: nauc_recall_at_100_std
value: 23.02548177144121
- type: nauc_recall_at_10_diff1
value: 15.873395525740868
- type: nauc_recall_at_10_max
value: 23.963191643746132
- type: nauc_recall_at_10_std
value: -7.2368622521479296
- type: nauc_recall_at_1_diff1
value: 27.8980502688949
- type: nauc_recall_at_1_max
value: 18.011347877902235
- type: nauc_recall_at_1_std
value: -21.8828654183319
- type: nauc_recall_at_20_diff1
value: 17.63321564134115
- type: nauc_recall_at_20_max
value: 27.284001947728797
- type: nauc_recall_at_20_std
value: 2.3851101283717666
- type: nauc_recall_at_3_diff1
value: 21.72291192189032
- type: nauc_recall_at_3_max
value: 23.109590882141113
- type: nauc_recall_at_3_std
value: -16.34348495895044
- type: nauc_recall_at_5_diff1
value: 17.596468253564954
- type: nauc_recall_at_5_max
value: 20.664891173216603
- type: nauc_recall_at_5_std
value: -14.565623699193717
- type: ndcg_at_1
value: 65.125
- type: ndcg_at_10
value: 48.979
- type: ndcg_at_100
value: 52.317
- type: ndcg_at_1000
value: 59.424
- type: ndcg_at_20
value: 47.806
- type: ndcg_at_3
value: 54.032000000000004
- type: ndcg_at_5
value: 51.520999999999994
- type: precision_at_1
value: 75.75
- type: precision_at_10
value: 38.975
- type: precision_at_100
value: 11.848
- type: precision_at_1000
value: 2.199
- type: precision_at_20
value: 29.387
- type: precision_at_3
value: 57.333
- type: precision_at_5
value: 50.0
- type: recall_at_1
value: 9.858
- type: recall_at_10
value: 28.061999999999998
- type: recall_at_100
value: 56.413000000000004
- type: recall_at_1000
value: 79.963
- type: recall_at_20
value: 36.161
- type: recall_at_3
value: 16.631
- type: recall_at_5
value: 21.363
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 79.985
- type: f1
value: 74.58481640194556
- type: f1_weighted
value: 80.85307620086522
- type: main_score
value: 79.985
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: main_score
value: 88.228
- type: map_at_1
value: 75.053
- type: map_at_10
value: 84.631
- type: map_at_100
value: 84.832
- type: map_at_1000
value: 84.844
- type: map_at_20
value: 84.756
- type: map_at_3
value: 83.553
- type: map_at_5
value: 84.273
- type: mrr_at_1
value: 80.993099309931
- type: mrr_at_10
value: 88.46116754532582
- type: mrr_at_100
value: 88.51308773245357
- type: mrr_at_1000
value: 88.51396221984112
- type: mrr_at_20
value: 88.49642168590128
- type: mrr_at_3
value: 87.86128612861275
- type: mrr_at_5
value: 88.30083008300812
- type: nauc_map_at_1000_diff1
value: 53.97823649031981
- type: nauc_map_at_1000_max
value: 26.11243091917843
- type: nauc_map_at_1000_std
value: -10.057644234421986
- type: nauc_map_at_100_diff1
value: 53.925080266687445
- type: nauc_map_at_100_max
value: 26.074046044483406
- type: nauc_map_at_100_std
value: -10.057139918091936
- type: nauc_map_at_10_diff1
value: 53.378232678228535
- type: nauc_map_at_10_max
value: 25.583629956942904
- type: nauc_map_at_10_std
value: -10.296034633396092
- type: nauc_map_at_1_diff1
value: 60.507796141511605
- type: nauc_map_at_1_max
value: 24.81979211893891
- type: nauc_map_at_1_std
value: -15.864717081534302
- type: nauc_map_at_20_diff1
value: 53.712573269726484
- type: nauc_map_at_20_max
value: 25.870196380003335
- type: nauc_map_at_20_std
value: -10.139248046597455
- type: nauc_map_at_3_diff1
value: 53.261264809399556
- type: nauc_map_at_3_max
value: 25.65803011606916
- type: nauc_map_at_3_std
value: -10.953616682218243
- type: nauc_map_at_5_diff1
value: 53.17212766431546
- type: nauc_map_at_5_max
value: 25.60582034909538
- type: nauc_map_at_5_std
value: -10.32613724902313
- type: nauc_mrr_at_1000_diff1
value: 70.38955167949939
- type: nauc_mrr_at_1000_max
value: 39.821515037282204
- type: nauc_mrr_at_1000_std
value: -9.98013185324074
- type: nauc_mrr_at_100_diff1
value: 70.38352452325266
- type: nauc_mrr_at_100_max
value: 39.82466363867733
- type: nauc_mrr_at_100_std
value: -9.976145831114493
- type: nauc_mrr_at_10_diff1
value: 70.26683508867457
- type: nauc_mrr_at_10_max
value: 39.80122496712571
- type: nauc_mrr_at_10_std
value: -9.909384325865775
- type: nauc_mrr_at_1_diff1
value: 73.24890171347613
- type: nauc_mrr_at_1_max
value: 37.367459553642426
- type: nauc_mrr_at_1_std
value: -13.316391532791135
- type: nauc_mrr_at_20_diff1
value: 70.34500637714407
- type: nauc_mrr_at_20_max
value: 39.84118580511733
- type: nauc_mrr_at_20_std
value: -9.920771311393942
- type: nauc_mrr_at_3_diff1
value: 70.04420618345499
- type: nauc_mrr_at_3_max
value: 40.33885175872482
- type: nauc_mrr_at_3_std
value: -9.2308606747524
- type: nauc_mrr_at_5_diff1
value: 70.23298852823912
- type: nauc_mrr_at_5_max
value: 40.28613289657475
- type: nauc_mrr_at_5_std
value: -9.408644815171415
- type: nauc_ndcg_at_1000_diff1
value: 56.14884407654613
- type: nauc_ndcg_at_1000_max
value: 29.027269391217793
- type: nauc_ndcg_at_1000_std
value: -8.185655036370417
- type: nauc_ndcg_at_100_diff1
value: 54.898228209830854
- type: nauc_ndcg_at_100_max
value: 28.23127072967732
- type: nauc_ndcg_at_100_std
value: -7.937951960666996
- type: nauc_ndcg_at_10_diff1
value: 52.76884326536276
- type: nauc_ndcg_at_10_max
value: 26.501133559532004
- type: nauc_ndcg_at_10_std
value: -8.561291306720568
- type: nauc_ndcg_at_1_diff1
value: 73.24890171347613
- type: nauc_ndcg_at_1_max
value: 37.367459553642426
- type: nauc_ndcg_at_1_std
value: -13.316391532791135
- type: nauc_ndcg_at_20_diff1
value: 53.782879241534154
- type: nauc_ndcg_at_20_max
value: 27.344714620733146
- type: nauc_ndcg_at_20_std
value: -8.174365511016143
- type: nauc_ndcg_at_3_diff1
value: 54.07748391367295
- type: nauc_ndcg_at_3_max
value: 28.740769448822867
- type: nauc_ndcg_at_3_std
value: -8.800638719106981
- type: nauc_ndcg_at_5_diff1
value: 52.91102194973326
- type: nauc_ndcg_at_5_max
value: 27.35297204098582
- type: nauc_ndcg_at_5_std
value: -8.202780538104845
- type: nauc_precision_at_1000_diff1
value: -6.462960135986346
- type: nauc_precision_at_1000_max
value: 12.759892798322381
- type: nauc_precision_at_1000_std
value: 17.830413795603956
- type: nauc_precision_at_100_diff1
value: -10.714161244793623
- type: nauc_precision_at_100_max
value: 10.80916133379338
- type: nauc_precision_at_100_std
value: 21.01280694690889
- type: nauc_precision_at_10_diff1
value: -12.867253218059915
- type: nauc_precision_at_10_max
value: 9.575643543429718
- type: nauc_precision_at_10_std
value: 21.405171955259224
- type: nauc_precision_at_1_diff1
value: 73.24890171347613
- type: nauc_precision_at_1_max
value: 37.367459553642426
- type: nauc_precision_at_1_std
value: -13.316391532791135
- type: nauc_precision_at_20_diff1
value: -11.766460335141424
- type: nauc_precision_at_20_max
value: 10.17190973145006
- type: nauc_precision_at_20_std
value: 21.752924700590835
- type: nauc_precision_at_3_diff1
value: 5.241669513189873
- type: nauc_precision_at_3_max
value: 21.722890037760354
- type: nauc_precision_at_3_std
value: 16.83232605784222
- type: nauc_precision_at_5_diff1
value: -6.750151592516413
- type: nauc_precision_at_5_max
value: 15.059744329415048
- type: nauc_precision_at_5_std
value: 21.831836531443653
- type: nauc_recall_at_1000_diff1
value: 8.852828649246417
- type: nauc_recall_at_1000_max
value: 6.683830914994345
- type: nauc_recall_at_1000_std
value: 37.66593889403836
- type: nauc_recall_at_100_diff1
value: -2.4986179820673344
- type: nauc_recall_at_100_max
value: -1.230471742842536
- type: nauc_recall_at_100_std
value: 22.724612835383482
- type: nauc_recall_at_10_diff1
value: 8.921193487520886
- type: nauc_recall_at_10_max
value: 1.4012350766088484
- type: nauc_recall_at_10_std
value: 2.9284367419689037
- type: nauc_recall_at_1_diff1
value: 60.507796141511605
- type: nauc_recall_at_1_max
value: 24.81979211893891
- type: nauc_recall_at_1_std
value: -15.864717081534302
- type: nauc_recall_at_20_diff1
value: 6.778598529994739
- type: nauc_recall_at_20_max
value: 1.9108915219621572
- type: nauc_recall_at_20_std
value: 9.15003581851989
- type: nauc_recall_at_3_diff1
value: 30.17670764440773
- type: nauc_recall_at_3_max
value: 17.769313053478434
- type: nauc_recall_at_3_std
value: -2.7848998516990386
- type: nauc_recall_at_5_diff1
value: 19.986644381812553
- type: nauc_recall_at_5_max
value: 11.751813635626322
- type: nauc_recall_at_5_std
value: 1.6890369172263033
- type: ndcg_at_1
value: 80.99300000000001
- type: ndcg_at_10
value: 88.228
- type: ndcg_at_100
value: 88.897
- type: ndcg_at_1000
value: 89.093
- type: ndcg_at_20
value: 88.542
- type: ndcg_at_3
value: 86.687
- type: ndcg_at_5
value: 87.607
- type: precision_at_1
value: 80.99300000000001
- type: precision_at_10
value: 10.707
- type: precision_at_100
value: 1.127
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.457999999999999
- type: precision_at_3
value: 33.538000000000004
- type: precision_at_5
value: 20.801
- type: recall_at_1
value: 75.053
- type: recall_at_10
value: 95.27799999999999
- type: recall_at_100
value: 97.853
- type: recall_at_1000
value: 99.03800000000001
- type: recall_at_20
value: 96.318
- type: recall_at_3
value: 91.08000000000001
- type: recall_at_5
value: 93.45400000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: main_score
value: 54.071999999999996
- type: map_at_1
value: 27.345000000000002
- type: map_at_10
value: 45.694
- type: map_at_100
value: 47.949999999999996
- type: map_at_1000
value: 48.093
- type: map_at_20
value: 47.035
- type: map_at_3
value: 40.049
- type: map_at_5
value: 42.92
- type: mrr_at_1
value: 52.160493827160494
- type: mrr_at_10
value: 61.28527336860672
- type: mrr_at_100
value: 61.884625221750596
- type: mrr_at_1000
value: 61.915904963540726
- type: mrr_at_20
value: 61.667012493286734
- type: mrr_at_3
value: 59.10493827160492
- type: mrr_at_5
value: 60.362654320987616
- type: nauc_map_at_1000_diff1
value: 45.38067605515959
- type: nauc_map_at_1000_max
value: 37.86197840734717
- type: nauc_map_at_1000_std
value: -10.88609599497855
- type: nauc_map_at_100_diff1
value: 45.34384139935809
- type: nauc_map_at_100_max
value: 37.79374163212799
- type: nauc_map_at_100_std
value: -10.800059266281165
- type: nauc_map_at_10_diff1
value: 45.3913490132632
- type: nauc_map_at_10_max
value: 36.79840356578914
- type: nauc_map_at_10_std
value: -12.036133364054884
- type: nauc_map_at_1_diff1
value: 51.66552774401746
- type: nauc_map_at_1_max
value: 25.324194752193236
- type: nauc_map_at_1_std
value: -14.34697090462958
- type: nauc_map_at_20_diff1
value: 45.27320308873338
- type: nauc_map_at_20_max
value: 37.29442746411085
- type: nauc_map_at_20_std
value: -11.635204276133472
- type: nauc_map_at_3_diff1
value: 46.88138818586725
- type: nauc_map_at_3_max
value: 32.99288436262902
- type: nauc_map_at_3_std
value: -13.639274978165444
- type: nauc_map_at_5_diff1
value: 45.76135530895121
- type: nauc_map_at_5_max
value: 34.97804527762444
- type: nauc_map_at_5_std
value: -12.678346477642899
- type: nauc_mrr_at_1000_diff1
value: 53.864293429447955
- type: nauc_mrr_at_1000_max
value: 45.79808916389802
- type: nauc_mrr_at_1000_std
value: -9.713381409523494
- type: nauc_mrr_at_100_diff1
value: 53.85134409074757
- type: nauc_mrr_at_100_max
value: 45.80389587114905
- type: nauc_mrr_at_100_std
value: -9.683169165384212
- type: nauc_mrr_at_10_diff1
value: 53.805490205878
- type: nauc_mrr_at_10_max
value: 45.806760270208564
- type: nauc_mrr_at_10_std
value: -9.76722195012393
- type: nauc_mrr_at_1_diff1
value: 56.27330361790344
- type: nauc_mrr_at_1_max
value: 47.01503122847836
- type: nauc_mrr_at_1_std
value: -10.774154484447495
- type: nauc_mrr_at_20_diff1
value: 53.83482468037953
- type: nauc_mrr_at_20_max
value: 45.719679695052974
- type: nauc_mrr_at_20_std
value: -9.77923533594551
- type: nauc_mrr_at_3_diff1
value: 54.44641861789147
- type: nauc_mrr_at_3_max
value: 45.94905694818705
- type: nauc_mrr_at_3_std
value: -11.177467065728768
- type: nauc_mrr_at_5_diff1
value: 54.09429588760707
- type: nauc_mrr_at_5_max
value: 46.004166041517216
- type: nauc_mrr_at_5_std
value: -9.769538819499722
- type: nauc_ndcg_at_1000_diff1
value: 46.80179242198247
- type: nauc_ndcg_at_1000_max
value: 40.806989668058186
- type: nauc_ndcg_at_1000_std
value: -8.015013067414483
- type: nauc_ndcg_at_100_diff1
value: 46.26031710590574
- type: nauc_ndcg_at_100_max
value: 40.2874844490879
- type: nauc_ndcg_at_100_std
value: -6.325738537481981
- type: nauc_ndcg_at_10_diff1
value: 46.0597385861321
- type: nauc_ndcg_at_10_max
value: 38.12369512757341
- type: nauc_ndcg_at_10_std
value: -9.95387894167638
- type: nauc_ndcg_at_1_diff1
value: 56.27330361790344
- type: nauc_ndcg_at_1_max
value: 47.01503122847836
- type: nauc_ndcg_at_1_std
value: -10.774154484447495
- type: nauc_ndcg_at_20_diff1
value: 46.112983276165046
- type: nauc_ndcg_at_20_max
value: 38.60654549021085
- type: nauc_ndcg_at_20_std
value: -9.66055049547148
- type: nauc_ndcg_at_3_diff1
value: 46.07426386701122
- type: nauc_ndcg_at_3_max
value: 39.30739016101109
- type: nauc_ndcg_at_3_std
value: -12.50493736255984
- type: nauc_ndcg_at_5_diff1
value: 45.71298951268576
- type: nauc_ndcg_at_5_max
value: 37.27961846995706
- type: nauc_ndcg_at_5_std
value: -11.154006989020496
- type: nauc_precision_at_1000_diff1
value: -11.592438042119445
- type: nauc_precision_at_1000_max
value: 17.294449668418288
- type: nauc_precision_at_1000_std
value: 9.709962161201647
- type: nauc_precision_at_100_diff1
value: -6.0095430176874345
- type: nauc_precision_at_100_max
value: 22.901828845166698
- type: nauc_precision_at_100_std
value: 14.993379617197682
- type: nauc_precision_at_10_diff1
value: 6.719203274493172
- type: nauc_precision_at_10_max
value: 32.512145720381795
- type: nauc_precision_at_10_std
value: 5.244187871424349
- type: nauc_precision_at_1_diff1
value: 56.27330361790344
- type: nauc_precision_at_1_max
value: 47.01503122847836
- type: nauc_precision_at_1_std
value: -10.774154484447495
- type: nauc_precision_at_20_diff1
value: 1.754389508301811
- type: nauc_precision_at_20_max
value: 29.02035054956672
- type: nauc_precision_at_20_std
value: 8.161759871402037
- type: nauc_precision_at_3_diff1
value: 24.040968725090252
- type: nauc_precision_at_3_max
value: 40.10318275587437
- type: nauc_precision_at_3_std
value: -3.878413890678057
- type: nauc_precision_at_5_diff1
value: 15.218812798552142
- type: nauc_precision_at_5_max
value: 37.25953351705925
- type: nauc_precision_at_5_std
value: 0.7155796998283327
- type: nauc_recall_at_1000_diff1
value: 10.583253250637997
- type: nauc_recall_at_1000_max
value: -3.5637377831543846
- type: nauc_recall_at_1000_std
value: 34.74872993454209
- type: nauc_recall_at_100_diff1
value: 26.680647396718747
- type: nauc_recall_at_100_max
value: 25.289227360067045
- type: nauc_recall_at_100_std
value: 19.215575737374877
- type: nauc_recall_at_10_diff1
value: 35.49850774071538
- type: nauc_recall_at_10_max
value: 27.12975488283297
- type: nauc_recall_at_10_std
value: -6.7757139852899995
- type: nauc_recall_at_1_diff1
value: 51.66552774401746
- type: nauc_recall_at_1_max
value: 25.324194752193236
- type: nauc_recall_at_1_std
value: -14.34697090462958
- type: nauc_recall_at_20_diff1
value: 33.87213110916921
- type: nauc_recall_at_20_max
value: 25.15617289177912
- type: nauc_recall_at_20_std
value: -6.44141075455468
- type: nauc_recall_at_3_diff1
value: 42.167552979112784
- type: nauc_recall_at_3_max
value: 26.47073745859859
- type: nauc_recall_at_3_std
value: -13.151450499133057
- type: nauc_recall_at_5_diff1
value: 38.5058386963604
- type: nauc_recall_at_5_max
value: 26.128698034399218
- type: nauc_recall_at_5_std
value: -8.92423552488776
- type: ndcg_at_1
value: 52.16
- type: ndcg_at_10
value: 54.071999999999996
- type: ndcg_at_100
value: 60.851
- type: ndcg_at_1000
value: 62.907999999999994
- type: ndcg_at_20
value: 57.001000000000005
- type: ndcg_at_3
value: 49.712
- type: ndcg_at_5
value: 50.791
- type: precision_at_1
value: 52.16
- type: precision_at_10
value: 15.062000000000001
- type: precision_at_100
value: 2.218
- type: precision_at_1000
value: 0.258
- type: precision_at_20
value: 8.827
- type: precision_at_3
value: 33.282000000000004
- type: precision_at_5
value: 24.012
- type: recall_at_1
value: 27.345000000000002
- type: recall_at_10
value: 61.846999999999994
- type: recall_at_100
value: 86.125
- type: recall_at_1000
value: 98.13199999999999
- type: recall_at_20
value: 70.545
- type: recall_at_3
value: 45.446
- type: recall_at_5
value: 52.031000000000006
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: main_score
value: 74.959
- type: map_at_1
value: 39.507
- type: map_at_10
value: 67.368
- type: map_at_100
value: 68.208
- type: map_at_1000
value: 68.258
- type: map_at_20
value: 67.9
- type: map_at_3
value: 63.695
- type: map_at_5
value: 66.069
- type: mrr_at_1
value: 79.01417960837273
- type: mrr_at_10
value: 85.29256294009818
- type: mrr_at_100
value: 85.43598687762321
- type: mrr_at_1000
value: 85.44005185885888
- type: mrr_at_20
value: 85.385908910377
- type: mrr_at_3
value: 84.41368444744523
- type: mrr_at_5
value: 84.9990997074046
- type: nauc_map_at_1000_diff1
value: 21.12170489619495
- type: nauc_map_at_1000_max
value: 26.275894183746722
- type: nauc_map_at_1000_std
value: -6.270422764724773
- type: nauc_map_at_100_diff1
value: 21.100748412891427
- type: nauc_map_at_100_max
value: 26.267357900952376
- type: nauc_map_at_100_std
value: -6.244347667315573
- type: nauc_map_at_10_diff1
value: 20.674777133569105
- type: nauc_map_at_10_max
value: 26.0464950302885
- type: nauc_map_at_10_std
value: -6.879486555235194
- type: nauc_map_at_1_diff1
value: 61.578918111691614
- type: nauc_map_at_1_max
value: 42.809228851971554
- type: nauc_map_at_1_std
value: -18.693501607160478
- type: nauc_map_at_20_diff1
value: 21.016679127441627
- type: nauc_map_at_20_max
value: 26.26493055547197
- type: nauc_map_at_20_std
value: -6.348265956664924
- type: nauc_map_at_3_diff1
value: 19.211524514269673
- type: nauc_map_at_3_max
value: 25.179630796295072
- type: nauc_map_at_3_std
value: -9.469682815051597
- type: nauc_map_at_5_diff1
value: 19.802257269903983
- type: nauc_map_at_5_max
value: 25.843065189828675
- type: nauc_map_at_5_std
value: -7.6911117288836275
- type: nauc_mrr_at_1000_diff1
value: 60.90611255392621
- type: nauc_mrr_at_1000_max
value: 45.28902337460921
- type: nauc_mrr_at_1000_std
value: -15.081836800607629
- type: nauc_mrr_at_100_diff1
value: 60.906319613903634
- type: nauc_mrr_at_100_max
value: 45.294454122569135
- type: nauc_mrr_at_100_std
value: -15.070354934845525
- type: nauc_mrr_at_10_diff1
value: 60.89081258769886
- type: nauc_mrr_at_10_max
value: 45.340063090713706
- type: nauc_mrr_at_10_std
value: -15.019436328769977
- type: nauc_mrr_at_1_diff1
value: 61.578918111691614
- type: nauc_mrr_at_1_max
value: 42.809228851971554
- type: nauc_mrr_at_1_std
value: -18.693501607160478
- type: nauc_mrr_at_20_diff1
value: 60.91444288979141
- type: nauc_mrr_at_20_max
value: 45.31431373445948
- type: nauc_mrr_at_20_std
value: -14.97309014683095
- type: nauc_mrr_at_3_diff1
value: 60.772894031312696
- type: nauc_mrr_at_3_max
value: 45.605293386022225
- type: nauc_mrr_at_3_std
value: -15.391241831624658
- type: nauc_mrr_at_5_diff1
value: 60.71990183490615
- type: nauc_mrr_at_5_max
value: 45.478031078283045
- type: nauc_mrr_at_5_std
value: -15.099732959629012
- type: nauc_ndcg_at_1000_diff1
value: 27.86370916809549
- type: nauc_ndcg_at_1000_max
value: 29.961195201820917
- type: nauc_ndcg_at_1000_std
value: -3.669547648606182
- type: nauc_ndcg_at_100_diff1
value: 27.222363197903203
- type: nauc_ndcg_at_100_max
value: 29.83590955603319
- type: nauc_ndcg_at_100_std
value: -2.706914023646432
- type: nauc_ndcg_at_10_diff1
value: 25.720275283710905
- type: nauc_ndcg_at_10_max
value: 29.099451842124513
- type: nauc_ndcg_at_10_std
value: -4.974149196543083
- type: nauc_ndcg_at_1_diff1
value: 61.578918111691614
- type: nauc_ndcg_at_1_max
value: 42.809228851971554
- type: nauc_ndcg_at_1_std
value: -18.693501607160478
- type: nauc_ndcg_at_20_diff1
value: 26.6414778719889
- type: nauc_ndcg_at_20_max
value: 29.7153470420483
- type: nauc_ndcg_at_20_std
value: -3.323674164247545
- type: nauc_ndcg_at_3_diff1
value: 23.854705547556676
- type: nauc_ndcg_at_3_max
value: 28.16819582399863
- type: nauc_ndcg_at_3_std
value: -9.175742937548364
- type: nauc_ndcg_at_5_diff1
value: 24.235289969946336
- type: nauc_ndcg_at_5_max
value: 28.837159697000736
- type: nauc_ndcg_at_5_std
value: -6.6312641809059825
- type: nauc_precision_at_1000_diff1
value: 15.588021360728687
- type: nauc_precision_at_1000_max
value: 22.39953961246837
- type: nauc_precision_at_1000_std
value: 47.68406787651948
- type: nauc_precision_at_100_diff1
value: 14.082191847912181
- type: nauc_precision_at_100_max
value: 24.398280717374227
- type: nauc_precision_at_100_std
value: 29.845964300686106
- type: nauc_precision_at_10_diff1
value: 14.078430107561424
- type: nauc_precision_at_10_max
value: 24.03621964514711
- type: nauc_precision_at_10_std
value: 6.216273371941104
- type: nauc_precision_at_1_diff1
value: 61.578918111691614
- type: nauc_precision_at_1_max
value: 42.809228851971554
- type: nauc_precision_at_1_std
value: -18.693501607160478
- type: nauc_precision_at_20_diff1
value: 15.305783955465262
- type: nauc_precision_at_20_max
value: 25.331504698917186
- type: nauc_precision_at_20_std
value: 14.995465986068544
- type: nauc_precision_at_3_diff1
value: 13.428353704090718
- type: nauc_precision_at_3_max
value: 24.235140590205866
- type: nauc_precision_at_3_std
value: -5.482186394535428
- type: nauc_precision_at_5_diff1
value: 12.446233438129173
- type: nauc_precision_at_5_max
value: 24.572973116392642
- type: nauc_precision_at_5_std
value: 0.24277662503234543
- type: nauc_recall_at_1000_diff1
value: 15.588021360729346
- type: nauc_recall_at_1000_max
value: 22.399539612468512
- type: nauc_recall_at_1000_std
value: 47.684067876519485
- type: nauc_recall_at_100_diff1
value: 14.082191847912085
- type: nauc_recall_at_100_max
value: 24.398280717374345
- type: nauc_recall_at_100_std
value: 29.845964300686095
- type: nauc_recall_at_10_diff1
value: 14.078430107561557
- type: nauc_recall_at_10_max
value: 24.03621964514713
- type: nauc_recall_at_10_std
value: 6.216273371941132
- type: nauc_recall_at_1_diff1
value: 61.578918111691614
- type: nauc_recall_at_1_max
value: 42.809228851971554
- type: nauc_recall_at_1_std
value: -18.693501607160478
- type: nauc_recall_at_20_diff1
value: 15.30578395546536
- type: nauc_recall_at_20_max
value: 25.33150469891727
- type: nauc_recall_at_20_std
value: 14.995465986068684
- type: nauc_recall_at_3_diff1
value: 13.428353704090698
- type: nauc_recall_at_3_max
value: 24.235140590205813
- type: nauc_recall_at_3_std
value: -5.482186394535521
- type: nauc_recall_at_5_diff1
value: 12.446233438129164
- type: nauc_recall_at_5_max
value: 24.572973116392614
- type: nauc_recall_at_5_std
value: 0.242776625032411
- type: ndcg_at_1
value: 79.014
- type: ndcg_at_10
value: 74.959
- type: ndcg_at_100
value: 77.70700000000001
- type: ndcg_at_1000
value: 78.628
- type: ndcg_at_20
value: 76.23400000000001
- type: ndcg_at_3
value: 69.891
- type: ndcg_at_5
value: 72.82600000000001
- type: precision_at_1
value: 79.014
- type: precision_at_10
value: 15.946
- type: precision_at_100
value: 1.806
- type: precision_at_1000
value: 0.193
- type: precision_at_20
value: 8.381
- type: precision_at_3
value: 45.726
- type: precision_at_5
value: 29.75
- type: recall_at_1
value: 39.507
- type: recall_at_10
value: 79.73
- type: recall_at_100
value: 90.28399999999999
- type: recall_at_1000
value: 96.327
- type: recall_at_20
value: 83.815
- type: recall_at_3
value: 68.589
- type: recall_at_5
value: 74.375
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification (default)
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 96.6444
- type: ap
value: 94.71353946976426
- type: ap_weighted
value: 94.71353946976426
- type: f1
value: 96.64368622221421
- type: f1_weighted
value: 96.64368622221419
- type: main_score
value: 96.6444
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: main_score
value: 42.545
- type: map_at_1
value: 22.655
- type: map_at_10
value: 35.467999999999996
- type: map_at_100
value: 36.652
- type: map_at_1000
value: 36.692
- type: map_at_20
value: 36.236000000000004
- type: map_at_3
value: 31.485000000000003
- type: map_at_5
value: 33.908
- type: mrr_at_1
value: 23.280802292263612
- type: mrr_at_10
value: 36.02329217264742
- type: mrr_at_100
value: 37.148118537446415
- type: mrr_at_1000
value: 37.183801105059956
- type: mrr_at_20
value: 36.76525675340281
- type: mrr_at_3
value: 32.096466093600654
- type: mrr_at_5
value: 34.50334288443163
- type: nauc_map_at_1000_diff1
value: 34.520324529857184
- type: nauc_map_at_1000_max
value: 35.326534835022514
- type: nauc_map_at_1000_std
value: -21.366160566488187
- type: nauc_map_at_100_diff1
value: 34.51815749165448
- type: nauc_map_at_100_max
value: 35.36490672794807
- type: nauc_map_at_100_std
value: -21.34319223709314
- type: nauc_map_at_10_diff1
value: 34.36733390350321
- type: nauc_map_at_10_max
value: 35.47907368100861
- type: nauc_map_at_10_std
value: -21.932334599571735
- type: nauc_map_at_1_diff1
value: 37.89554066876773
- type: nauc_map_at_1_max
value: 28.579792597905413
- type: nauc_map_at_1_std
value: -20.51606339206856
- type: nauc_map_at_20_diff1
value: 34.51926497566516
- type: nauc_map_at_20_max
value: 35.497148638709895
- type: nauc_map_at_20_std
value: -21.595925239714767
- type: nauc_map_at_3_diff1
value: 34.64924634604746
- type: nauc_map_at_3_max
value: 33.298757220805754
- type: nauc_map_at_3_std
value: -22.44092979514115
- type: nauc_map_at_5_diff1
value: 34.52262452267762
- type: nauc_map_at_5_max
value: 34.993794904126
- type: nauc_map_at_5_std
value: -22.19799323514771
- type: nauc_mrr_at_1000_diff1
value: 34.30028152962552
- type: nauc_mrr_at_1000_max
value: 34.84294030005338
- type: nauc_mrr_at_1000_std
value: -21.3040159303398
- type: nauc_mrr_at_100_diff1
value: 34.29714922716057
- type: nauc_mrr_at_100_max
value: 34.8773691257525
- type: nauc_mrr_at_100_std
value: -21.280800887086606
- type: nauc_mrr_at_10_diff1
value: 34.141133687651255
- type: nauc_mrr_at_10_max
value: 34.97057209823848
- type: nauc_mrr_at_10_std
value: -21.82443447975521
- type: nauc_mrr_at_1_diff1
value: 37.68273289251851
- type: nauc_mrr_at_1_max
value: 28.375793374298752
- type: nauc_mrr_at_1_std
value: -20.548630760150132
- type: nauc_mrr_at_20_diff1
value: 34.29297087665669
- type: nauc_mrr_at_20_max
value: 34.99361503254817
- type: nauc_mrr_at_20_std
value: -21.492481020546688
- type: nauc_mrr_at_3_diff1
value: 34.46337545862703
- type: nauc_mrr_at_3_max
value: 32.91269289394109
- type: nauc_mrr_at_3_std
value: -22.400479840328636
- type: nauc_mrr_at_5_diff1
value: 34.28655737221164
- type: nauc_mrr_at_5_max
value: 34.57504983292885
- type: nauc_mrr_at_5_std
value: -22.11521048383527
- type: nauc_ndcg_at_1000_diff1
value: 33.62874580335025
- type: nauc_ndcg_at_1000_max
value: 37.1872988379906
- type: nauc_ndcg_at_1000_std
value: -19.60332451143694
- type: nauc_ndcg_at_100_diff1
value: 33.5135571710796
- type: nauc_ndcg_at_100_max
value: 38.255675537823564
- type: nauc_ndcg_at_100_std
value: -18.69039354080076
- type: nauc_ndcg_at_10_diff1
value: 33.04700239507369
- type: nauc_ndcg_at_10_max
value: 38.87644726684572
- type: nauc_ndcg_at_10_std
value: -21.761270791633518
- type: nauc_ndcg_at_1_diff1
value: 37.68273289251851
- type: nauc_ndcg_at_1_max
value: 28.375793374298752
- type: nauc_ndcg_at_1_std
value: -20.548630760150132
- type: nauc_ndcg_at_20_diff1
value: 33.59333929099863
- type: nauc_ndcg_at_20_max
value: 39.13869119152796
- type: nauc_ndcg_at_20_std
value: -20.455820914752028
- type: nauc_ndcg_at_3_diff1
value: 33.72195690786571
- type: nauc_ndcg_at_3_max
value: 34.58224856532535
- type: nauc_ndcg_at_3_std
value: -22.932493269664924
- type: nauc_ndcg_at_5_diff1
value: 33.454322211125806
- type: nauc_ndcg_at_5_max
value: 37.62697388354373
- type: nauc_ndcg_at_5_std
value: -22.471519101664132
- type: nauc_precision_at_1000_diff1
value: -4.815785976068792
- type: nauc_precision_at_1000_max
value: -1.6093873845854942
- type: nauc_precision_at_1000_std
value: 14.781030440554144
- type: nauc_precision_at_100_diff1
value: 11.770023400226492
- type: nauc_precision_at_100_max
value: 32.39585905434347
- type: nauc_precision_at_100_std
value: 13.926995268735807
- type: nauc_precision_at_10_diff1
value: 26.033870063028758
- type: nauc_precision_at_10_max
value: 46.706875249128515
- type: nauc_precision_at_10_std
value: -19.221899044995098
- type: nauc_precision_at_1_diff1
value: 37.68273289251851
- type: nauc_precision_at_1_max
value: 28.375793374298752
- type: nauc_precision_at_1_std
value: -20.548630760150132
- type: nauc_precision_at_20_diff1
value: 25.100441174579007
- type: nauc_precision_at_20_max
value: 46.91020875403547
- type: nauc_precision_at_20_std
value: -11.192277515531218
- type: nauc_precision_at_3_diff1
value: 30.618588495438985
- type: nauc_precision_at_3_max
value: 37.248088037331286
- type: nauc_precision_at_3_std
value: -23.92085440457614
- type: nauc_precision_at_5_diff1
value: 29.142344221391838
- type: nauc_precision_at_5_max
value: 43.527892902769175
- type: nauc_precision_at_5_std
value: -22.312841501376514
- type: nauc_recall_at_1000_diff1
value: 12.994211769214695
- type: nauc_recall_at_1000_max
value: 55.743471097359446
- type: nauc_recall_at_1000_std
value: 50.500646267896954
- type: nauc_recall_at_100_diff1
value: 25.84611790014738
- type: nauc_recall_at_100_max
value: 62.84236269533729
- type: nauc_recall_at_100_std
value: 16.99467383693571
- type: nauc_recall_at_10_diff1
value: 28.494332014682527
- type: nauc_recall_at_10_max
value: 50.75293572531052
- type: nauc_recall_at_10_std
value: -20.592936248452297
- type: nauc_recall_at_1_diff1
value: 37.89554066876773
- type: nauc_recall_at_1_max
value: 28.579792597905413
- type: nauc_recall_at_1_std
value: -20.51606339206856
- type: nauc_recall_at_20_diff1
value: 30.144206368539777
- type: nauc_recall_at_20_max
value: 55.78415931465269
- type: nauc_recall_at_20_std
value: -13.536686353112964
- type: nauc_recall_at_3_diff1
value: 31.153704257566993
- type: nauc_recall_at_3_max
value: 38.10114875174283
- type: nauc_recall_at_3_std
value: -24.098427776224725
- type: nauc_recall_at_5_diff1
value: 30.330462760076372
- type: nauc_recall_at_5_max
value: 45.334521843132926
- type: nauc_recall_at_5_std
value: -23.00539480314331
- type: ndcg_at_1
value: 23.281
- type: ndcg_at_10
value: 42.545
- type: ndcg_at_100
value: 48.091
- type: ndcg_at_1000
value: 49.135
- type: ndcg_at_20
value: 45.279
- type: ndcg_at_3
value: 34.507
- type: ndcg_at_5
value: 38.824
- type: precision_at_1
value: 23.281
- type: precision_at_10
value: 6.7250000000000005
- type: precision_at_100
value: 0.947
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.9309999999999996
- type: precision_at_3
value: 14.771
- type: precision_at_5
value: 11.049000000000001
- type: recall_at_1
value: 22.655
- type: recall_at_10
value: 64.316
- type: recall_at_100
value: 89.596
- type: recall_at_1000
value: 97.627
- type: recall_at_20
value: 74.946
- type: recall_at_3
value: 42.625
- type: recall_at_5
value: 52.967
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 97.02462380300956
- type: f1
value: 96.7276440209508
- type: f1_weighted
value: 97.04875399973407
- type: main_score
value: 97.02462380300956
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 87.9252165982672
- type: f1
value: 67.80472291570956
- type: f1_weighted
value: 87.85202683538105
- type: main_score
value: 87.9252165982672
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 80.60524546065905
- type: f1
value: 78.33960315662881
- type: f1_weighted
value: 79.95922500362934
- type: main_score
value: 80.60524546065905
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 81.93006052454606
- type: f1
value: 81.2714057686431
- type: f1_weighted
value: 81.85548518216183
- type: main_score
value: 81.93006052454606
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P (default)
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 41.53648349744648
- type: v_measure
value: 41.53648349744648
- type: v_measure_std
value: 1.3073220375465617
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S (default)
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 40.53587011806646
- type: v_measure
value: 40.53587011806646
- type: v_measure_std
value: 1.4167198988428324
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking (default)
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: main_score
value: 34.12179940649658
- type: map
value: 34.12179940649658
- type: mrr
value: 35.58575183247432
- type: nAUC_map_diff1
value: 18.455263729874243
- type: nAUC_map_max
value: -18.69448732764168
- type: nAUC_map_std
value: 8.198608386567457
- type: nAUC_mrr_diff1
value: 16.22907129322154
- type: nAUC_mrr_max
value: -13.594180628663738
- type: nAUC_mrr_std
value: 8.300689743851711
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus (default)
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: main_score
value: 37.545
- type: map_at_1
value: 5.949999999999999
- type: map_at_10
value: 13.8
- type: map_at_100
value: 17.653
- type: map_at_1000
value: 19.322
- type: map_at_20
value: 15.318999999999999
- type: map_at_3
value: 10.211
- type: map_at_5
value: 11.757
- type: mrr_at_1
value: 49.84520123839009
- type: mrr_at_10
value: 58.26490245220894
- type: mrr_at_100
value: 58.751461262818694
- type: mrr_at_1000
value: 58.782721242595436
- type: mrr_at_20
value: 58.51537179710553
- type: mrr_at_3
value: 56.24355005159959
- type: mrr_at_5
value: 57.26522187822497
- type: nauc_map_at_1000_diff1
value: 24.708811804064975
- type: nauc_map_at_1000_max
value: 22.51050994461675
- type: nauc_map_at_1000_std
value: 12.29167974269923
- type: nauc_map_at_100_diff1
value: 27.740081813309736
- type: nauc_map_at_100_max
value: 22.220395977232094
- type: nauc_map_at_100_std
value: 8.670978243811184
- type: nauc_map_at_10_diff1
value: 34.4703308010072
- type: nauc_map_at_10_max
value: 18.539226897919768
- type: nauc_map_at_10_std
value: -0.9186030178287692
- type: nauc_map_at_1_diff1
value: 48.903950722167245
- type: nauc_map_at_1_max
value: 4.368015121190565
- type: nauc_map_at_1_std
value: -11.682230965520118
- type: nauc_map_at_20_diff1
value: 30.85072911235718
- type: nauc_map_at_20_max
value: 20.024421045580016
- type: nauc_map_at_20_std
value: 2.4437812527877223
- type: nauc_map_at_3_diff1
value: 39.1701521223124
- type: nauc_map_at_3_max
value: 12.315298159822717
- type: nauc_map_at_3_std
value: -5.1211175310668775
- type: nauc_map_at_5_diff1
value: 38.23279649034153
- type: nauc_map_at_5_max
value: 14.562453378970972
- type: nauc_map_at_5_std
value: -3.8872952078037306
- type: nauc_mrr_at_1000_diff1
value: 25.76454031603339
- type: nauc_mrr_at_1000_max
value: 36.987486973646504
- type: nauc_mrr_at_1000_std
value: 23.993127405911782
- type: nauc_mrr_at_100_diff1
value: 25.75748809964789
- type: nauc_mrr_at_100_max
value: 37.00137109451698
- type: nauc_mrr_at_100_std
value: 24.02115415632134
- type: nauc_mrr_at_10_diff1
value: 25.859969609083706
- type: nauc_mrr_at_10_max
value: 36.94417043125623
- type: nauc_mrr_at_10_std
value: 23.69193588816108
- type: nauc_mrr_at_1_diff1
value: 25.13856497503111
- type: nauc_mrr_at_1_max
value: 33.3647833822104
- type: nauc_mrr_at_1_std
value: 21.516825179743293
- type: nauc_mrr_at_20_diff1
value: 25.642602521543896
- type: nauc_mrr_at_20_max
value: 37.00173585685738
- type: nauc_mrr_at_20_std
value: 23.948759826317996
- type: nauc_mrr_at_3_diff1
value: 24.57379470383737
- type: nauc_mrr_at_3_max
value: 35.05292254453142
- type: nauc_mrr_at_3_std
value: 22.164140056553332
- type: nauc_mrr_at_5_diff1
value: 25.945828840033407
- type: nauc_mrr_at_5_max
value: 36.28692013847132
- type: nauc_mrr_at_5_std
value: 23.029834512220173
- type: nauc_ndcg_at_1000_diff1
value: 20.296757719323153
- type: nauc_ndcg_at_1000_max
value: 37.48395095000081
- type: nauc_ndcg_at_1000_std
value: 31.427363488785897
- type: nauc_ndcg_at_100_diff1
value: 20.850922448339382
- type: nauc_ndcg_at_100_max
value: 31.994561388810332
- type: nauc_ndcg_at_100_std
value: 24.999776113877374
- type: nauc_ndcg_at_10_diff1
value: 15.294338982345188
- type: nauc_ndcg_at_10_max
value: 28.88313313311664
- type: nauc_ndcg_at_10_std
value: 20.868634992089486
- type: nauc_ndcg_at_1_diff1
value: 26.184542545764266
- type: nauc_ndcg_at_1_max
value: 33.49408854189648
- type: nauc_ndcg_at_1_std
value: 21.644457229854616
- type: nauc_ndcg_at_20_diff1
value: 15.341797014632963
- type: nauc_ndcg_at_20_max
value: 27.84956487113421
- type: nauc_ndcg_at_20_std
value: 21.97010876262456
- type: nauc_ndcg_at_3_diff1
value: 16.617546176572887
- type: nauc_ndcg_at_3_max
value: 31.0807079505372
- type: nauc_ndcg_at_3_std
value: 20.563003372087447
- type: nauc_ndcg_at_5_diff1
value: 17.141262698147518
- type: nauc_ndcg_at_5_max
value: 31.014000002769315
- type: nauc_ndcg_at_5_std
value: 21.903989918122914
- type: nauc_precision_at_1000_diff1
value: -26.736915033118148
- type: nauc_precision_at_1000_max
value: 0.41514162563304957
- type: nauc_precision_at_1000_std
value: 29.414979920206335
- type: nauc_precision_at_100_diff1
value: -22.29841081134693
- type: nauc_precision_at_100_max
value: 10.670850649163286
- type: nauc_precision_at_100_std
value: 37.030209783550625
- type: nauc_precision_at_10_diff1
value: -7.401740939284052
- type: nauc_precision_at_10_max
value: 26.372442015476512
- type: nauc_precision_at_10_std
value: 28.058522245561985
- type: nauc_precision_at_1_diff1
value: 25.992836361025546
- type: nauc_precision_at_1_max
value: 33.81712388873076
- type: nauc_precision_at_1_std
value: 22.130100241561536
- type: nauc_precision_at_20_diff1
value: -14.715825716659179
- type: nauc_precision_at_20_max
value: 21.0400050444382
- type: nauc_precision_at_20_std
value: 32.37056700564148
- type: nauc_precision_at_3_diff1
value: 5.626852329412606
- type: nauc_precision_at_3_max
value: 31.486758990096703
- type: nauc_precision_at_3_std
value: 23.372250462239542
- type: nauc_precision_at_5_diff1
value: 1.2273456188651337
- type: nauc_precision_at_5_max
value: 30.63448937975829
- type: nauc_precision_at_5_std
value: 27.319392615570614
- type: nauc_recall_at_1000_diff1
value: 7.442058577828199
- type: nauc_recall_at_1000_max
value: 17.366286208134948
- type: nauc_recall_at_1000_std
value: 16.538023469059937
- type: nauc_recall_at_100_diff1
value: 18.263940318090828
- type: nauc_recall_at_100_max
value: 18.766819889035368
- type: nauc_recall_at_100_std
value: 10.297431485613268
- type: nauc_recall_at_10_diff1
value: 30.052808504776717
- type: nauc_recall_at_10_max
value: 17.223636924464284
- type: nauc_recall_at_10_std
value: -2.8915719805312126
- type: nauc_recall_at_1_diff1
value: 48.903950722167245
- type: nauc_recall_at_1_max
value: 4.368015121190565
- type: nauc_recall_at_1_std
value: -11.682230965520118
- type: nauc_recall_at_20_diff1
value: 25.00678345922952
- type: nauc_recall_at_20_max
value: 17.734815525743993
- type: nauc_recall_at_20_std
value: 1.2937788396283523
- type: nauc_recall_at_3_diff1
value: 34.053479666933164
- type: nauc_recall_at_3_max
value: 10.356061180744728
- type: nauc_recall_at_3_std
value: -7.622782189103819
- type: nauc_recall_at_5_diff1
value: 35.282050319114994
- type: nauc_recall_at_5_max
value: 13.444414495259005
- type: nauc_recall_at_5_std
value: -6.406123122708332
- type: ndcg_at_1
value: 47.833
- type: ndcg_at_10
value: 37.545
- type: ndcg_at_100
value: 34.608
- type: ndcg_at_1000
value: 43.789
- type: ndcg_at_20
value: 34.724
- type: ndcg_at_3
value: 43.055
- type: ndcg_at_5
value: 40.595
- type: precision_at_1
value: 49.536
- type: precision_at_10
value: 27.678000000000004
- type: precision_at_100
value: 8.901
- type: precision_at_1000
value: 2.225
- type: precision_at_20
value: 20.279
- type: precision_at_3
value: 40.144000000000005
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 5.949999999999999
- type: recall_at_10
value: 18.368000000000002
- type: recall_at_100
value: 36.702
- type: recall_at_1000
value: 69.39800000000001
- type: recall_at_20
value: 22.241
- type: recall_at_3
value: 11.618
- type: recall_at_5
value: 14.338999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ (default)
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: main_score
value: 64.693
- type: map_at_1
value: 40.119
- type: map_at_10
value: 57.008
- type: map_at_100
value: 57.769999999999996
- type: map_at_1000
value: 57.782999999999994
- type: map_at_20
value: 57.528999999999996
- type: map_at_3
value: 52.713
- type: map_at_5
value: 55.462
- type: mrr_at_1
value: 45.017381228273464
- type: mrr_at_10
value: 59.62700481892242
- type: mrr_at_100
value: 60.11977007964554
- type: mrr_at_1000
value: 60.12838314206039
- type: mrr_at_20
value: 59.96971543639854
- type: mrr_at_3
value: 56.38277327153322
- type: mrr_at_5
value: 58.559772112784756
- type: nauc_map_at_1000_diff1
value: 39.016224863361316
- type: nauc_map_at_1000_max
value: 30.677526741613914
- type: nauc_map_at_1000_std
value: -9.925306326190029
- type: nauc_map_at_100_diff1
value: 39.02038091276591
- type: nauc_map_at_100_max
value: 30.687899774856326
- type: nauc_map_at_100_std
value: -9.914518833390677
- type: nauc_map_at_10_diff1
value: 39.04523753783543
- type: nauc_map_at_10_max
value: 30.976052448225627
- type: nauc_map_at_10_std
value: -10.41607954987974
- type: nauc_map_at_1_diff1
value: 40.06219774868448
- type: nauc_map_at_1_max
value: 26.735486652072517
- type: nauc_map_at_1_std
value: -8.304382193524896
- type: nauc_map_at_20_diff1
value: 39.05577477358533
- type: nauc_map_at_20_max
value: 30.78179300885049
- type: nauc_map_at_20_std
value: -10.033002471334074
- type: nauc_map_at_3_diff1
value: 38.802559695913885
- type: nauc_map_at_3_max
value: 30.365699555342978
- type: nauc_map_at_3_std
value: -11.716942405728881
- type: nauc_map_at_5_diff1
value: 38.88593641854277
- type: nauc_map_at_5_max
value: 30.93585211223555
- type: nauc_map_at_5_std
value: -10.926633622752911
- type: nauc_mrr_at_1000_diff1
value: 39.04692080086692
- type: nauc_mrr_at_1000_max
value: 30.197468259524175
- type: nauc_mrr_at_1000_std
value: -7.818491692833017
- type: nauc_mrr_at_100_diff1
value: 39.0473015118493
- type: nauc_mrr_at_100_max
value: 30.203218891973965
- type: nauc_mrr_at_100_std
value: -7.809410627895269
- type: nauc_mrr_at_10_diff1
value: 39.022526617566456
- type: nauc_mrr_at_10_max
value: 30.41103199763037
- type: nauc_mrr_at_10_std
value: -7.986473780645788
- type: nauc_mrr_at_1_diff1
value: 40.2687402313342
- type: nauc_mrr_at_1_max
value: 26.56359606867155
- type: nauc_mrr_at_1_std
value: -6.6659359448538025
- type: nauc_mrr_at_20_diff1
value: 39.048111884686826
- type: nauc_mrr_at_20_max
value: 30.246914959156364
- type: nauc_mrr_at_20_std
value: -7.801804075454251
- type: nauc_mrr_at_3_diff1
value: 38.8647060004973
- type: nauc_mrr_at_3_max
value: 30.225427021287963
- type: nauc_mrr_at_3_std
value: -9.016676247800575
- type: nauc_mrr_at_5_diff1
value: 38.95589884289447
- type: nauc_mrr_at_5_max
value: 30.55482027762662
- type: nauc_mrr_at_5_std
value: -8.287991164740555
- type: nauc_ndcg_at_1000_diff1
value: 38.935229352725536
- type: nauc_ndcg_at_1000_max
value: 31.318278701790277
- type: nauc_ndcg_at_1000_std
value: -8.498883716013742
- type: nauc_ndcg_at_100_diff1
value: 39.00131687376651
- type: nauc_ndcg_at_100_max
value: 31.60126452179523
- type: nauc_ndcg_at_100_std
value: -8.0878761098937
- type: nauc_ndcg_at_10_diff1
value: 38.997637272745976
- type: nauc_ndcg_at_10_max
value: 32.81562205617119
- type: nauc_ndcg_at_10_std
value: -9.809117549403716
- type: nauc_ndcg_at_1_diff1
value: 40.2687402313342
- type: nauc_ndcg_at_1_max
value: 26.56359606867155
- type: nauc_ndcg_at_1_std
value: -6.6659359448538025
- type: nauc_ndcg_at_20_diff1
value: 39.05787809282005
- type: nauc_ndcg_at_20_max
value: 32.148837127474216
- type: nauc_ndcg_at_20_std
value: -8.538216720226362
- type: nauc_ndcg_at_3_diff1
value: 38.514904225460185
- type: nauc_ndcg_at_3_max
value: 31.647932572190907
- type: nauc_ndcg_at_3_std
value: -12.117323520301271
- type: nauc_ndcg_at_5_diff1
value: 38.67523620158631
- type: nauc_ndcg_at_5_max
value: 32.71111428248374
- type: nauc_ndcg_at_5_std
value: -10.830509911489106
- type: nauc_precision_at_1000_diff1
value: -10.134425320872637
- type: nauc_precision_at_1000_max
value: -7.9214866985442836
- type: nauc_precision_at_1000_std
value: 14.593125138517463
- type: nauc_precision_at_100_diff1
value: -6.427184925035445
- type: nauc_precision_at_100_max
value: -3.565171885582329
- type: nauc_precision_at_100_std
value: 15.87161403279646
- type: nauc_precision_at_10_diff1
value: 9.87974963974257
- type: nauc_precision_at_10_max
value: 14.701681974930208
- type: nauc_precision_at_10_std
value: 3.7336847482921924
- type: nauc_precision_at_1_diff1
value: 40.2687402313342
- type: nauc_precision_at_1_max
value: 26.56359606867155
- type: nauc_precision_at_1_std
value: -6.6659359448538025
- type: nauc_precision_at_20_diff1
value: 2.890969722929749
- type: nauc_precision_at_20_max
value: 6.794303444012595
- type: nauc_precision_at_20_std
value: 10.705845010583102
- type: nauc_precision_at_3_diff1
value: 25.828531407512916
- type: nauc_precision_at_3_max
value: 26.003194922700068
- type: nauc_precision_at_3_std
value: -9.365843001852745
- type: nauc_precision_at_5_diff1
value: 18.442430286590213
- type: nauc_precision_at_5_max
value: 22.17126455978319
- type: nauc_precision_at_5_std
value: -3.307912326207094
- type: nauc_recall_at_1000_diff1
value: 37.08230039820157
- type: nauc_recall_at_1000_max
value: 47.10529218716289
- type: nauc_recall_at_1000_std
value: 47.786964110589096
- type: nauc_recall_at_100_diff1
value: 41.32053677940413
- type: nauc_recall_at_100_max
value: 53.09289155866568
- type: nauc_recall_at_100_std
value: 32.47492854799267
- type: nauc_recall_at_10_diff1
value: 37.31427344851398
- type: nauc_recall_at_10_max
value: 43.0702780767873
- type: nauc_recall_at_10_std
value: -10.887409444200305
- type: nauc_recall_at_1_diff1
value: 40.06219774868448
- type: nauc_recall_at_1_max
value: 26.735486652072517
- type: nauc_recall_at_1_std
value: -8.304382193524896
- type: nauc_recall_at_20_diff1
value: 38.026247692487225
- type: nauc_recall_at_20_max
value: 43.122612480943125
- type: nauc_recall_at_20_std
value: 0.06425536869830446
- type: nauc_recall_at_3_diff1
value: 36.42120384763962
- type: nauc_recall_at_3_max
value: 34.94129978903372
- type: nauc_recall_at_3_std
value: -15.716640140198779
- type: nauc_recall_at_5_diff1
value: 36.15895636103322
- type: nauc_recall_at_5_max
value: 38.80623578799298
- type: nauc_recall_at_5_std
value: -13.51525373978869
- type: ndcg_at_1
value: 45.017
- type: ndcg_at_10
value: 64.693
- type: ndcg_at_100
value: 67.632
- type: ndcg_at_1000
value: 67.91199999999999
- type: ndcg_at_20
value: 66.277
- type: ndcg_at_3
value: 57.046
- type: ndcg_at_5
value: 61.516999999999996
- type: precision_at_1
value: 45.017
- type: precision_at_10
value: 10.18
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.479
- type: precision_at_3
value: 25.541000000000004
- type: precision_at_5
value: 17.949
- type: recall_at_1
value: 40.119
- type: recall_at_10
value: 85.139
- type: recall_at_100
value: 97.444
- type: recall_at_1000
value: 99.529
- type: recall_at_20
value: 90.88199999999999
- type: recall_at_3
value: 65.88000000000001
- type: recall_at_5
value: 76.132
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval (default)
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: main_score
value: 88.773
- type: map_at_1
value: 70.96000000000001
- type: map_at_10
value: 85.174
- type: map_at_100
value: 85.804
- type: map_at_1000
value: 85.817
- type: map_at_20
value: 85.596
- type: map_at_3
value: 82.219
- type: map_at_5
value: 84.098
- type: mrr_at_1
value: 81.76
- type: mrr_at_10
value: 87.79770634920607
- type: mrr_at_100
value: 87.89799102352673
- type: mrr_at_1000
value: 87.89865476743903
- type: mrr_at_20
value: 87.87680512328197
- type: mrr_at_3
value: 86.81999999999978
- type: mrr_at_5
value: 87.51299999999969
- type: nauc_map_at_1000_diff1
value: 76.90119675123604
- type: nauc_map_at_1000_max
value: 20.079761155170527
- type: nauc_map_at_1000_std
value: -62.08844878736319
- type: nauc_map_at_100_diff1
value: 76.91315659037733
- type: nauc_map_at_100_max
value: 20.037613519830543
- type: nauc_map_at_100_std
value: -62.1809605413574
- type: nauc_map_at_10_diff1
value: 77.29490073584684
- type: nauc_map_at_10_max
value: 18.97493585375514
- type: nauc_map_at_10_std
value: -65.06133578431042
- type: nauc_map_at_1_diff1
value: 80.92204517914038
- type: nauc_map_at_1_max
value: 12.955779715044127
- type: nauc_map_at_1_std
value: -53.185870692847814
- type: nauc_map_at_20_diff1
value: 77.06595372320452
- type: nauc_map_at_20_max
value: 19.587544100405307
- type: nauc_map_at_20_std
value: -63.38932039031718
- type: nauc_map_at_3_diff1
value: 77.81358593606132
- type: nauc_map_at_3_max
value: 16.415667501797888
- type: nauc_map_at_3_std
value: -65.91817124009025
- type: nauc_map_at_5_diff1
value: 77.55572041802866
- type: nauc_map_at_5_max
value: 17.84810271472641
- type: nauc_map_at_5_std
value: -66.5202429218229
- type: nauc_mrr_at_1000_diff1
value: 77.25919152483527
- type: nauc_mrr_at_1000_max
value: 23.266505681060313
- type: nauc_mrr_at_1000_std
value: -56.997207262592106
- type: nauc_mrr_at_100_diff1
value: 77.25865200926027
- type: nauc_mrr_at_100_max
value: 23.266917952901537
- type: nauc_mrr_at_100_std
value: -56.99775622461676
- type: nauc_mrr_at_10_diff1
value: 77.27177237809222
- type: nauc_mrr_at_10_max
value: 23.234422413279194
- type: nauc_mrr_at_10_std
value: -57.287098821203166
- type: nauc_mrr_at_1_diff1
value: 77.87705968629228
- type: nauc_mrr_at_1_max
value: 23.357364820166353
- type: nauc_mrr_at_1_std
value: -52.724677718394254
- type: nauc_mrr_at_20_diff1
value: 77.26510245027495
- type: nauc_mrr_at_20_max
value: 23.250601444229872
- type: nauc_mrr_at_20_std
value: -57.073576665896155
- type: nauc_mrr_at_3_diff1
value: 77.08835110871802
- type: nauc_mrr_at_3_max
value: 23.37973990414157
- type: nauc_mrr_at_3_std
value: -57.54668286148783
- type: nauc_mrr_at_5_diff1
value: 77.22940631493309
- type: nauc_mrr_at_5_max
value: 23.256197542861436
- type: nauc_mrr_at_5_std
value: -57.710428425249404
- type: nauc_ndcg_at_1000_diff1
value: 76.67905982606639
- type: nauc_ndcg_at_1000_max
value: 21.96802435224643
- type: nauc_ndcg_at_1000_std
value: -59.660695538408405
- type: nauc_ndcg_at_100_diff1
value: 76.72641745578917
- type: nauc_ndcg_at_100_max
value: 21.752538833557992
- type: nauc_ndcg_at_100_std
value: -60.14387533073589
- type: nauc_ndcg_at_10_diff1
value: 77.1697583832975
- type: nauc_ndcg_at_10_max
value: 19.90438134636175
- type: nauc_ndcg_at_10_std
value: -65.62207352990609
- type: nauc_ndcg_at_1_diff1
value: 77.87705968629228
- type: nauc_ndcg_at_1_max
value: 23.357364820166353
- type: nauc_ndcg_at_1_std
value: -52.724677718394254
- type: nauc_ndcg_at_20_diff1
value: 76.98061052184806
- type: nauc_ndcg_at_20_max
value: 20.514885434747328
- type: nauc_ndcg_at_20_std
value: -63.237149791291415
- type: nauc_ndcg_at_3_diff1
value: 76.32552624964065
- type: nauc_ndcg_at_3_max
value: 19.923840442393544
- type: nauc_ndcg_at_3_std
value: -63.588842129524245
- type: nauc_ndcg_at_5_diff1
value: 76.9533163521833
- type: nauc_ndcg_at_5_max
value: 19.51602820692585
- type: nauc_ndcg_at_5_std
value: -66.23316232094454
- type: nauc_precision_at_1000_diff1
value: -45.73706664910733
- type: nauc_precision_at_1000_max
value: 7.913436156563994
- type: nauc_precision_at_1000_std
value: 53.06948338411226
- type: nauc_precision_at_100_diff1
value: -45.31368947263091
- type: nauc_precision_at_100_max
value: 7.188900869218029
- type: nauc_precision_at_100_std
value: 50.86284056359611
- type: nauc_precision_at_10_diff1
value: -39.50336694936736
- type: nauc_precision_at_10_max
value: 6.702378324096768
- type: nauc_precision_at_10_std
value: 31.03637763595204
- type: nauc_precision_at_1_diff1
value: 77.87705968629228
- type: nauc_precision_at_1_max
value: 23.357364820166353
- type: nauc_precision_at_1_std
value: -52.724677718394254
- type: nauc_precision_at_20_diff1
value: -43.09729408672091
- type: nauc_precision_at_20_max
value: 6.532907159014953
- type: nauc_precision_at_20_std
value: 40.98770041852758
- type: nauc_precision_at_3_diff1
value: -19.675745078316503
- type: nauc_precision_at_3_max
value: 9.254372245883973
- type: nauc_precision_at_3_std
value: 3.557752877438361
- type: nauc_precision_at_5_diff1
value: -32.17451238619065
- type: nauc_precision_at_5_max
value: 7.457382998315637
- type: nauc_precision_at_5_std
value: 17.684523480181884
- type: nauc_recall_at_1000_diff1
value: 35.54833030189762
- type: nauc_recall_at_1000_max
value: -113.13072963237435
- type: nauc_recall_at_1000_std
value: -45.37230224613866
- type: nauc_recall_at_100_diff1
value: 74.70783770156788
- type: nauc_recall_at_100_max
value: 5.165483155761366
- type: nauc_recall_at_100_std
value: -98.18356589742223
- type: nauc_recall_at_10_diff1
value: 76.44831137766471
- type: nauc_recall_at_10_max
value: 6.645874880559598
- type: nauc_recall_at_10_std
value: -104.42733750490795
- type: nauc_recall_at_1_diff1
value: 80.92204517914038
- type: nauc_recall_at_1_max
value: 12.955779715044127
- type: nauc_recall_at_1_std
value: -53.185870692847814
- type: nauc_recall_at_20_diff1
value: 76.9330017100496
- type: nauc_recall_at_20_max
value: 1.3282965733900722
- type: nauc_recall_at_20_std
value: -110.44267520170585
- type: nauc_recall_at_3_diff1
value: 74.75571112449231
- type: nauc_recall_at_3_max
value: 11.392712834655518
- type: nauc_recall_at_3_std
value: -77.70319541112546
- type: nauc_recall_at_5_diff1
value: 74.44393885573719
- type: nauc_recall_at_5_max
value: 9.071230160466847
- type: nauc_recall_at_5_std
value: -90.6015799064062
- type: ndcg_at_1
value: 81.76
- type: ndcg_at_10
value: 88.773
- type: ndcg_at_100
value: 89.93100000000001
- type: ndcg_at_1000
value: 90.005
- type: ndcg_at_20
value: 89.436
- type: ndcg_at_3
value: 85.997
- type: ndcg_at_5
value: 87.571
- type: precision_at_1
value: 81.76
- type: precision_at_10
value: 13.542000000000002
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.184
- type: precision_at_3
value: 37.8
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 70.96000000000001
- type: recall_at_10
value: 95.741
- type: recall_at_100
value: 99.685
- type: recall_at_1000
value: 99.995
- type: recall_at_20
value: 97.909
- type: recall_at_3
value: 87.739
- type: recall_at_5
value: 92.203
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering (default)
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 65.91810902432418
- type: v_measure
value: 65.91810902432418
- type: v_measure_std
value: 3.988775454635202
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P (default)
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 68.02321609158898
- type: v_measure
value: 68.02321609158898
- type: v_measure_std
value: 13.048787017567099
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS (default)
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: main_score
value: 23.814
- type: map_at_1
value: 5.455
- type: map_at_10
value: 14.208000000000002
- type: map_at_100
value: 17.328
- type: map_at_1000
value: 17.748
- type: map_at_20
value: 15.735
- type: map_at_3
value: 9.614
- type: map_at_5
value: 11.777999999999999
- type: mrr_at_1
value: 26.900000000000002
- type: mrr_at_10
value: 38.7683333333333
- type: mrr_at_100
value: 39.86887355087612
- type: mrr_at_1000
value: 39.89416581484622
- type: mrr_at_20
value: 39.45076687545336
- type: mrr_at_3
value: 34.283333333333324
- type: mrr_at_5
value: 36.7733333333333
- type: nauc_map_at_1000_diff1
value: 6.582032797360866
- type: nauc_map_at_1000_max
value: 17.29971642208067
- type: nauc_map_at_1000_std
value: 6.571653079053965
- type: nauc_map_at_100_diff1
value: 6.6520274055220945
- type: nauc_map_at_100_max
value: 17.28927582943446
- type: nauc_map_at_100_std
value: 6.3788070086997735
- type: nauc_map_at_10_diff1
value: 7.8097868021789765
- type: nauc_map_at_10_max
value: 15.868814598414307
- type: nauc_map_at_10_std
value: 1.3833485160000003
- type: nauc_map_at_1_diff1
value: 20.002393048021077
- type: nauc_map_at_1_max
value: 16.777673629413144
- type: nauc_map_at_1_std
value: -1.5982142140773345
- type: nauc_map_at_20_diff1
value: 7.026484961291383
- type: nauc_map_at_20_max
value: 16.358039615308098
- type: nauc_map_at_20_std
value: 4.265555678748822
- type: nauc_map_at_3_diff1
value: 11.670235117521639
- type: nauc_map_at_3_max
value: 15.421371305785032
- type: nauc_map_at_3_std
value: -2.0891385987905253
- type: nauc_map_at_5_diff1
value: 8.782941099433515
- type: nauc_map_at_5_max
value: 15.429505319062791
- type: nauc_map_at_5_std
value: 0.01706038881959217
- type: nauc_mrr_at_1000_diff1
value: 14.424089575104654
- type: nauc_mrr_at_1000_max
value: 18.354632635310146
- type: nauc_mrr_at_1000_std
value: 3.148669746271006
- type: nauc_mrr_at_100_diff1
value: 14.43190469520255
- type: nauc_mrr_at_100_max
value: 18.37445314994635
- type: nauc_mrr_at_100_std
value: 3.175095104402658
- type: nauc_mrr_at_10_diff1
value: 14.015953357582356
- type: nauc_mrr_at_10_max
value: 18.334773185007375
- type: nauc_mrr_at_10_std
value: 3.1788218175601917
- type: nauc_mrr_at_1_diff1
value: 20.06438180516676
- type: nauc_mrr_at_1_max
value: 16.906770193671957
- type: nauc_mrr_at_1_std
value: -1.591329233808127
- type: nauc_mrr_at_20_diff1
value: 14.126339493553159
- type: nauc_mrr_at_20_max
value: 18.316449447055653
- type: nauc_mrr_at_20_std
value: 3.1850941428621042
- type: nauc_mrr_at_3_diff1
value: 14.730386161975737
- type: nauc_mrr_at_3_max
value: 17.32498171231654
- type: nauc_mrr_at_3_std
value: 1.321654906709584
- type: nauc_mrr_at_5_diff1
value: 14.46476336413886
- type: nauc_mrr_at_5_max
value: 17.940958841978826
- type: nauc_mrr_at_5_std
value: 2.9529508335708945
- type: nauc_ndcg_at_1000_diff1
value: 6.681346718194129
- type: nauc_ndcg_at_1000_max
value: 21.404613477283746
- type: nauc_ndcg_at_1000_std
value: 14.596655479547055
- type: nauc_ndcg_at_100_diff1
value: 6.3302594607492155
- type: nauc_ndcg_at_100_max
value: 21.26459769654865
- type: nauc_ndcg_at_100_std
value: 14.522962033467959
- type: nauc_ndcg_at_10_diff1
value: 7.025732359853311
- type: nauc_ndcg_at_10_max
value: 17.31881906701822
- type: nauc_ndcg_at_10_std
value: 4.692540938431521
- type: nauc_ndcg_at_1_diff1
value: 20.06438180516676
- type: nauc_ndcg_at_1_max
value: 16.906770193671957
- type: nauc_ndcg_at_1_std
value: -1.591329233808127
- type: nauc_ndcg_at_20_diff1
value: 6.355140893975436
- type: nauc_ndcg_at_20_max
value: 18.29467935307024
- type: nauc_ndcg_at_20_std
value: 8.87309764856374
- type: nauc_ndcg_at_3_diff1
value: 11.131091987737578
- type: nauc_ndcg_at_3_max
value: 15.876946297140213
- type: nauc_ndcg_at_3_std
value: -0.19961674229045062
- type: nauc_ndcg_at_5_diff1
value: 8.719384001108486
- type: nauc_ndcg_at_5_max
value: 16.561854761839523
- type: nauc_ndcg_at_5_std
value: 2.849455858958004
- type: nauc_precision_at_1000_diff1
value: -3.264266561841031
- type: nauc_precision_at_1000_max
value: 27.054907731659355
- type: nauc_precision_at_1000_std
value: 42.6582722652614
- type: nauc_precision_at_100_diff1
value: -1.4147583046219077
- type: nauc_precision_at_100_max
value: 22.691769918104637
- type: nauc_precision_at_100_std
value: 30.417860777083998
- type: nauc_precision_at_10_diff1
value: 0.7460714765387558
- type: nauc_precision_at_10_max
value: 16.189155199570223
- type: nauc_precision_at_10_std
value: 8.466856326540606
- type: nauc_precision_at_1_diff1
value: 20.06438180516676
- type: nauc_precision_at_1_max
value: 16.906770193671957
- type: nauc_precision_at_1_std
value: -1.591329233808127
- type: nauc_precision_at_20_diff1
value: -0.29107581757496714
- type: nauc_precision_at_20_max
value: 17.13909220544385
- type: nauc_precision_at_20_std
value: 16.413326815174717
- type: nauc_precision_at_3_diff1
value: 7.101179998696147
- type: nauc_precision_at_3_max
value: 14.797248842818975
- type: nauc_precision_at_3_std
value: 0.40582828085273265
- type: nauc_precision_at_5_diff1
value: 3.4483179666389696
- type: nauc_precision_at_5_max
value: 15.735507259648934
- type: nauc_precision_at_5_std
value: 5.671451893149887
- type: nauc_recall_at_1000_diff1
value: -3.8075718189695547
- type: nauc_recall_at_1000_max
value: 27.218180153734124
- type: nauc_recall_at_1000_std
value: 44.46679820329153
- type: nauc_recall_at_100_diff1
value: -1.4536649156519559
- type: nauc_recall_at_100_max
value: 22.44690502045992
- type: nauc_recall_at_100_std
value: 30.235557227945275
- type: nauc_recall_at_10_diff1
value: 0.6119379049099861
- type: nauc_recall_at_10_max
value: 15.882135185205446
- type: nauc_recall_at_10_std
value: 8.176733663905573
- type: nauc_recall_at_1_diff1
value: 20.002393048021077
- type: nauc_recall_at_1_max
value: 16.777673629413144
- type: nauc_recall_at_1_std
value: -1.5982142140773345
- type: nauc_recall_at_20_diff1
value: -0.1682800060016626
- type: nauc_recall_at_20_max
value: 16.971491120013564
- type: nauc_recall_at_20_std
value: 16.122046383351293
- type: nauc_recall_at_3_diff1
value: 6.988663029514718
- type: nauc_recall_at_3_max
value: 14.528152900658856
- type: nauc_recall_at_3_std
value: 0.17590933968510467
- type: nauc_recall_at_5_diff1
value: 3.353041984845736
- type: nauc_recall_at_5_max
value: 15.403568054057326
- type: nauc_recall_at_5_std
value: 5.319244399661828
- type: ndcg_at_1
value: 26.900000000000002
- type: ndcg_at_10
value: 23.814
- type: ndcg_at_100
value: 34.943999999999996
- type: ndcg_at_1000
value: 40.78
- type: ndcg_at_20
value: 27.643
- type: ndcg_at_3
value: 21.227
- type: ndcg_at_5
value: 19.038
- type: precision_at_1
value: 26.900000000000002
- type: precision_at_10
value: 12.73
- type: precision_at_100
value: 2.881
- type: precision_at_1000
value: 0.426
- type: precision_at_20
value: 8.57
- type: precision_at_3
value: 19.6
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 5.455
- type: recall_at_10
value: 25.802999999999997
- type: recall_at_100
value: 58.45
- type: recall_at_1000
value: 86.457
- type: recall_at_20
value: 34.762
- type: recall_at_3
value: 11.943
- type: recall_at_5
value: 17.043
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 85.60157402941752
- type: cosine_spearman
value: 82.98956725441452
- type: euclidean_pearson
value: 83.07824357271161
- type: euclidean_spearman
value: 82.98957395335212
- type: main_score
value: 82.98956725441452
- type: manhattan_pearson
value: 83.10748351148622
- type: manhattan_spearman
value: 83.16217281563378
- type: pearson
value: 85.60157402941752
- type: spearman
value: 82.98956725441452
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 85.20198919395854
- type: cosine_spearman
value: 78.17308450713497
- type: euclidean_pearson
value: 82.91465813078975
- type: euclidean_spearman
value: 78.17308450713497
- type: main_score
value: 78.17308450713497
- type: manhattan_pearson
value: 83.36938760055344
- type: manhattan_spearman
value: 78.77166023561925
- type: pearson
value: 85.20198919395854
- type: spearman
value: 78.17308450713497
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 87.3197290035165
- type: cosine_spearman
value: 88.12589189918039
- type: euclidean_pearson
value: 87.88474436451652
- type: euclidean_spearman
value: 88.12589189918039
- type: main_score
value: 88.12589189918039
- type: manhattan_pearson
value: 88.1114243109502
- type: manhattan_spearman
value: 88.40111910955112
- type: pearson
value: 87.3197290035165
- type: spearman
value: 88.12589189918039
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 87.91424745154934
- type: cosine_spearman
value: 88.78510857775494
- type: euclidean_pearson
value: 88.60854825357943
- type: euclidean_spearman
value: 88.78511307332248
- type: main_score
value: 88.78510857775494
- type: manhattan_pearson
value: 88.81490531409946
- type: manhattan_spearman
value: 89.10162579991359
- type: pearson
value: 87.91424745154934
- type: spearman
value: 88.78510857775494
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 84.42255273136605
- type: cosine_spearman
value: 86.46810322536955
- type: euclidean_pearson
value: 86.255541184091
- type: euclidean_spearman
value: 86.46810322536955
- type: main_score
value: 86.46810322536955
- type: manhattan_pearson
value: 86.72678851651064
- type: manhattan_spearman
value: 86.93777990302539
- type: pearson
value: 84.42255273136605
- type: spearman
value: 86.46810322536955
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 91.72746389892356
- type: cosine_spearman
value: 92.23283881812245
- type: euclidean_pearson
value: 92.29179177488737
- type: euclidean_spearman
value: 92.23283881812245
- type: main_score
value: 92.23283881812245
- type: manhattan_pearson
value: 92.13764526009247
- type: manhattan_spearman
value: 92.0582843442798
- type: pearson
value: 91.72746389892356
- type: spearman
value: 92.23283881812245
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 86.14912927994007
- type: cosine_spearman
value: 87.46655844472012
- type: euclidean_pearson
value: 87.53026653408118
- type: euclidean_spearman
value: 87.46655844472012
- type: main_score
value: 87.46655844472012
- type: manhattan_pearson
value: 87.68289898403299
- type: manhattan_spearman
value: 87.73630507998439
- type: pearson
value: 86.14912927994007
- type: spearman
value: 87.46655844472012
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR (default)
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: main_score
value: 86.97859154411299
- type: map
value: 86.97859154411299
- type: mrr
value: 96.35598968932302
- type: nAUC_map_diff1
value: -18.506120190268017
- type: nAUC_map_max
value: 55.78442121746724
- type: nAUC_map_std
value: 66.27889919160313
- type: nAUC_mrr_diff1
value: 18.288014199762895
- type: nAUC_mrr_max
value: 83.25297655347828
- type: nAUC_mrr_std
value: 72.809885375971
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact (default)
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: main_score
value: 52.842
- type: map_at_1
value: 32.911
- type: map_at_10
value: 46.013
- type: map_at_100
value: 47.11
- type: map_at_1000
value: 47.137
- type: map_at_20
value: 46.78
- type: map_at_3
value: 41.900999999999996
- type: map_at_5
value: 44.357
- type: mrr_at_1
value: 35.0
- type: mrr_at_10
value: 46.96574074074072
- type: mrr_at_100
value: 47.959931245967184
- type: mrr_at_1000
value: 47.98510849619688
- type: mrr_at_20
value: 47.68440206880607
- type: mrr_at_3
value: 43.77777777777776
- type: mrr_at_5
value: 45.611111111111086
- type: nauc_map_at_1000_diff1
value: 42.89180178126247
- type: nauc_map_at_1000_max
value: 45.75105611403444
- type: nauc_map_at_1000_std
value: 17.463513608950578
- type: nauc_map_at_100_diff1
value: 42.893512582653656
- type: nauc_map_at_100_max
value: 45.754617699990035
- type: nauc_map_at_100_std
value: 17.490513656867037
- type: nauc_map_at_10_diff1
value: 42.364748689290415
- type: nauc_map_at_10_max
value: 45.56642444523947
- type: nauc_map_at_10_std
value: 17.079579644716894
- type: nauc_map_at_1_diff1
value: 48.949793800671124
- type: nauc_map_at_1_max
value: 45.82239538118238
- type: nauc_map_at_1_std
value: 11.183927196674755
- type: nauc_map_at_20_diff1
value: 42.67947282270775
- type: nauc_map_at_20_max
value: 45.62274524098362
- type: nauc_map_at_20_std
value: 17.51316198529124
- type: nauc_map_at_3_diff1
value: 43.238404886755745
- type: nauc_map_at_3_max
value: 43.350130089078895
- type: nauc_map_at_3_std
value: 14.13657834477199
- type: nauc_map_at_5_diff1
value: 42.54474356788842
- type: nauc_map_at_5_max
value: 44.75146781225222
- type: nauc_map_at_5_std
value: 16.15648396925114
- type: nauc_mrr_at_1000_diff1
value: 43.556859926201554
- type: nauc_mrr_at_1000_max
value: 47.140291020802906
- type: nauc_mrr_at_1000_std
value: 18.805424261346374
- type: nauc_mrr_at_100_diff1
value: 43.55633267437543
- type: nauc_mrr_at_100_max
value: 47.14214569591525
- type: nauc_mrr_at_100_std
value: 18.828541893531277
- type: nauc_mrr_at_10_diff1
value: 43.07000882702881
- type: nauc_mrr_at_10_max
value: 47.10398430807609
- type: nauc_mrr_at_10_std
value: 18.672657418468155
- type: nauc_mrr_at_1_diff1
value: 50.71044015206451
- type: nauc_mrr_at_1_max
value: 50.31094117388535
- type: nauc_mrr_at_1_std
value: 16.308699760476404
- type: nauc_mrr_at_20_diff1
value: 43.34419341411509
- type: nauc_mrr_at_20_max
value: 47.127839363881634
- type: nauc_mrr_at_20_std
value: 18.93672383999524
- type: nauc_mrr_at_3_diff1
value: 44.09886232125989
- type: nauc_mrr_at_3_max
value: 47.35761798607356
- type: nauc_mrr_at_3_std
value: 18.66293179466984
- type: nauc_mrr_at_5_diff1
value: 43.455234122310486
- type: nauc_mrr_at_5_max
value: 46.95579311628989
- type: nauc_mrr_at_5_std
value: 18.637801785868913
- type: nauc_ndcg_at_1000_diff1
value: 42.09778197382488
- type: nauc_ndcg_at_1000_max
value: 46.41254633930011
- type: nauc_ndcg_at_1000_std
value: 19.727442899891408
- type: nauc_ndcg_at_100_diff1
value: 42.127587196947616
- type: nauc_ndcg_at_100_max
value: 46.56257426488274
- type: nauc_ndcg_at_100_std
value: 20.848893214507893
- type: nauc_ndcg_at_10_diff1
value: 39.520585737534184
- type: nauc_ndcg_at_10_max
value: 45.58832499779741
- type: nauc_ndcg_at_10_std
value: 19.230954524847657
- type: nauc_ndcg_at_1_diff1
value: 50.71044015206451
- type: nauc_ndcg_at_1_max
value: 50.31094117388535
- type: nauc_ndcg_at_1_std
value: 16.308699760476404
- type: nauc_ndcg_at_20_diff1
value: 40.57140695180754
- type: nauc_ndcg_at_20_max
value: 45.78884507871275
- type: nauc_ndcg_at_20_std
value: 20.87311919719877
- type: nauc_ndcg_at_3_diff1
value: 42.23214214323953
- type: nauc_ndcg_at_3_max
value: 44.25227959403861
- type: nauc_ndcg_at_3_std
value: 16.808716032720582
- type: nauc_ndcg_at_5_diff1
value: 40.32970262607426
- type: nauc_ndcg_at_5_max
value: 44.170446333441234
- type: nauc_ndcg_at_5_std
value: 17.670796157538952
- type: nauc_precision_at_1000_diff1
value: 4.4855757822300575
- type: nauc_precision_at_1000_max
value: 40.96816841248859
- type: nauc_precision_at_1000_std
value: 52.76450049154224
- type: nauc_precision_at_100_diff1
value: 13.467456291972423
- type: nauc_precision_at_100_max
value: 46.07633674307899
- type: nauc_precision_at_100_std
value: 58.38655747924394
- type: nauc_precision_at_10_diff1
value: 18.885447707274754
- type: nauc_precision_at_10_max
value: 47.475287933169
- type: nauc_precision_at_10_std
value: 40.78242836332111
- type: nauc_precision_at_1_diff1
value: 50.71044015206451
- type: nauc_precision_at_1_max
value: 50.31094117388535
- type: nauc_precision_at_1_std
value: 16.308699760476404
- type: nauc_precision_at_20_diff1
value: 15.953924273102402
- type: nauc_precision_at_20_max
value: 45.47509365077202
- type: nauc_precision_at_20_std
value: 51.47100789520174
- type: nauc_precision_at_3_diff1
value: 34.84717380734587
- type: nauc_precision_at_3_max
value: 45.610933933265756
- type: nauc_precision_at_3_std
value: 27.734101378690852
- type: nauc_precision_at_5_diff1
value: 26.59896898222078
- type: nauc_precision_at_5_max
value: 46.140890589971264
- type: nauc_precision_at_5_std
value: 33.56649457748371
- type: nauc_recall_at_1000_diff1
value: 86.92810457516407
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 43.86702049240759
- type: nauc_recall_at_100_max
value: 53.33308762101326
- type: nauc_recall_at_100_std
value: 63.09523809523798
- type: nauc_recall_at_10_diff1
value: 25.88560487444265
- type: nauc_recall_at_10_max
value: 41.6157709657381
- type: nauc_recall_at_10_std
value: 24.04962076662668
- type: nauc_recall_at_1_diff1
value: 48.949793800671124
- type: nauc_recall_at_1_max
value: 45.82239538118238
- type: nauc_recall_at_1_std
value: 11.183927196674755
- type: nauc_recall_at_20_diff1
value: 27.507691414639822
- type: nauc_recall_at_20_max
value: 41.70246318763185
- type: nauc_recall_at_20_std
value: 37.33722257696256
- type: nauc_recall_at_3_diff1
value: 35.956192998402784
- type: nauc_recall_at_3_max
value: 38.74690791289058
- type: nauc_recall_at_3_std
value: 15.683526476441553
- type: nauc_recall_at_5_diff1
value: 31.03358342668625
- type: nauc_recall_at_5_max
value: 37.820450291250786
- type: nauc_recall_at_5_std
value: 18.52848795003198
- type: ndcg_at_1
value: 35.0
- type: ndcg_at_10
value: 52.842
- type: ndcg_at_100
value: 57.513999999999996
- type: ndcg_at_1000
value: 58.272999999999996
- type: ndcg_at_20
value: 55.454
- type: ndcg_at_3
value: 45.452
- type: ndcg_at_5
value: 49.169000000000004
- type: precision_at_1
value: 35.0
- type: precision_at_10
value: 8.366999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 4.75
- type: precision_at_3
value: 19.333
- type: precision_at_5
value: 14.066999999999998
- type: recall_at_1
value: 32.911
- type: recall_at_10
value: 73.033
- type: recall_at_100
value: 93.667
- type: recall_at_1000
value: 99.667
- type: recall_at_20
value: 83.0
- type: recall_at_3
value: 52.878
- type: recall_at_5
value: 62.06700000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions (default)
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cosine_accuracy
value: 99.87425742574257
- type: cosine_accuracy_threshold
value: 85.4932188987732
- type: cosine_ap
value: 97.03588351132844
- type: cosine_f1
value: 93.60201511335012
- type: cosine_f1_threshold
value: 85.4932188987732
- type: cosine_precision
value: 94.31472081218274
- type: cosine_recall
value: 92.9
- type: dot_accuracy
value: 99.87425742574257
- type: dot_accuracy_threshold
value: 85.4932188987732
- type: dot_ap
value: 97.03588351132846
- type: dot_f1
value: 93.60201511335012
- type: dot_f1_threshold
value: 85.4932188987732
- type: dot_precision
value: 94.31472081218274
- type: dot_recall
value: 92.9
- type: euclidean_accuracy
value: 99.87425742574257
- type: euclidean_accuracy_threshold
value: 53.864240646362305
- type: euclidean_ap
value: 97.03588351132844
- type: euclidean_f1
value: 93.60201511335012
- type: euclidean_f1_threshold
value: 53.864240646362305
- type: euclidean_precision
value: 94.31472081218274
- type: euclidean_recall
value: 92.9
- type: main_score
value: 97.12020380643673
- type: manhattan_accuracy
value: 99.87821782178217
- type: manhattan_accuracy_threshold
value: 2557.1868896484375
- type: manhattan_ap
value: 97.12020380643673
- type: manhattan_f1
value: 93.83458646616543
- type: manhattan_f1_threshold
value: 2559.8316192626953
- type: manhattan_precision
value: 94.07035175879398
- type: manhattan_recall
value: 93.60000000000001
- type: max_accuracy
value: 99.87821782178217
- type: max_ap
value: 97.12020380643673
- type: max_f1
value: 93.83458646616543
- type: max_precision
value: 94.31472081218274
- type: max_recall
value: 93.60000000000001
- type: similarity_accuracy
value: 99.87425742574257
- type: similarity_accuracy_threshold
value: 85.4932188987732
- type: similarity_ap
value: 97.03588351132844
- type: similarity_f1
value: 93.60201511335012
- type: similarity_f1_threshold
value: 85.4932188987732
- type: similarity_precision
value: 94.31472081218274
- type: similarity_recall
value: 92.9
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering (default)
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 76.98818225336838
- type: v_measure
value: 76.98818225336838
- type: v_measure_std
value: 3.154967965946174
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P (default)
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 45.163651140607605
- type: v_measure
value: 45.163651140607605
- type: v_measure_std
value: 1.4322970276083837
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions (default)
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: main_score
value: 56.391883714372696
- type: map
value: 56.391883714372696
- type: mrr
value: 57.349492827434
- type: nAUC_map_diff1
value: 39.157250127064955
- type: nAUC_map_max
value: 18.467392575309553
- type: nAUC_map_std
value: 6.562904741623687
- type: nAUC_mrr_diff1
value: 39.2616391317946
- type: nAUC_mrr_max
value: 20.17824080849778
- type: nAUC_mrr_std
value: 7.3151994802766005
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 31.115370364087013
- type: cosine_spearman
value: 30.168250595399797
- type: dot_pearson
value: 31.11537534713581
- type: dot_spearman
value: 30.168250595399797
- type: main_score
value: 30.168250595399797
- type: pearson
value: 31.115370364087013
- type: spearman
value: 30.168250595399797
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID (default)
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: main_score
value: 58.492
- type: map_at_1
value: 0.20600000000000002
- type: map_at_10
value: 1.355
- type: map_at_100
value: 7.682
- type: map_at_1000
value: 19.422
- type: map_at_20
value: 2.307
- type: map_at_3
value: 0.504
- type: map_at_5
value: 0.756
- type: mrr_at_1
value: 76.0
- type: mrr_at_10
value: 83.07460317460317
- type: mrr_at_100
value: 83.34653299916457
- type: mrr_at_1000
value: 83.34653299916457
- type: mrr_at_20
value: 83.34653299916457
- type: mrr_at_3
value: 81.66666666666666
- type: mrr_at_5
value: 82.56666666666666
- type: nauc_map_at_1000_diff1
value: -0.9517122101342602
- type: nauc_map_at_1000_max
value: 35.489825727736665
- type: nauc_map_at_1000_std
value: 72.31927320292716
- type: nauc_map_at_100_diff1
value: -2.6696855309157197
- type: nauc_map_at_100_max
value: 16.881012905948
- type: nauc_map_at_100_std
value: 60.636797544764796
- type: nauc_map_at_10_diff1
value: 3.3220618387062166
- type: nauc_map_at_10_max
value: 7.9728051776655136
- type: nauc_map_at_10_std
value: 37.001872811447676
- type: nauc_map_at_1_diff1
value: 19.385947791364455
- type: nauc_map_at_1_max
value: -2.017784609408856
- type: nauc_map_at_1_std
value: 15.846915472515105
- type: nauc_map_at_20_diff1
value: 1.0613460412567055
- type: nauc_map_at_20_max
value: 7.639419874542262
- type: nauc_map_at_20_std
value: 42.004875229740826
- type: nauc_map_at_3_diff1
value: 7.0015165243253366
- type: nauc_map_at_3_max
value: 7.084211457521959
- type: nauc_map_at_3_std
value: 24.788352390570584
- type: nauc_map_at_5_diff1
value: 6.657899114095232
- type: nauc_map_at_5_max
value: 4.976947597730104
- type: nauc_map_at_5_std
value: 29.481454683941184
- type: nauc_mrr_at_1000_diff1
value: 14.561577730498792
- type: nauc_mrr_at_1000_max
value: 57.72810732532122
- type: nauc_mrr_at_1000_std
value: 66.88388647529588
- type: nauc_mrr_at_100_diff1
value: 14.561577730498792
- type: nauc_mrr_at_100_max
value: 57.72810732532122
- type: nauc_mrr_at_100_std
value: 66.88388647529588
- type: nauc_mrr_at_10_diff1
value: 14.57469254485188
- type: nauc_mrr_at_10_max
value: 58.079825098428714
- type: nauc_mrr_at_10_std
value: 67.32128458796227
- type: nauc_mrr_at_1_diff1
value: 25.34827377347056
- type: nauc_mrr_at_1_max
value: 50.58838798996285
- type: nauc_mrr_at_1_std
value: 59.36661763433414
- type: nauc_mrr_at_20_diff1
value: 14.561577730498792
- type: nauc_mrr_at_20_max
value: 57.72810732532122
- type: nauc_mrr_at_20_std
value: 66.88388647529588
- type: nauc_mrr_at_3_diff1
value: 9.063532868160214
- type: nauc_mrr_at_3_max
value: 58.71832537642312
- type: nauc_mrr_at_3_std
value: 69.07730444362834
- type: nauc_mrr_at_5_diff1
value: 13.555968426927894
- type: nauc_mrr_at_5_max
value: 59.22085120600723
- type: nauc_mrr_at_5_std
value: 67.47575721875769
- type: nauc_ndcg_at_1000_diff1
value: -1.8751322983265282
- type: nauc_ndcg_at_1000_max
value: 38.78712823179003
- type: nauc_ndcg_at_1000_std
value: 70.43132053994896
- type: nauc_ndcg_at_100_diff1
value: -10.220936212671377
- type: nauc_ndcg_at_100_max
value: 47.70220514113511
- type: nauc_ndcg_at_100_std
value: 75.65229647100806
- type: nauc_ndcg_at_10_diff1
value: 2.0956279601914227
- type: nauc_ndcg_at_10_max
value: 48.868693823231304
- type: nauc_ndcg_at_10_std
value: 70.16734895474447
- type: nauc_ndcg_at_1_diff1
value: 27.89880129091742
- type: nauc_ndcg_at_1_max
value: 44.14668818195789
- type: nauc_ndcg_at_1_std
value: 60.28699861687413
- type: nauc_ndcg_at_20_diff1
value: -3.5946895305356623
- type: nauc_ndcg_at_20_max
value: 46.68859141418255
- type: nauc_ndcg_at_20_std
value: 70.27067652865686
- type: nauc_ndcg_at_3_diff1
value: 7.400409149522286
- type: nauc_ndcg_at_3_max
value: 45.61078758588923
- type: nauc_ndcg_at_3_std
value: 62.06453130401961
- type: nauc_ndcg_at_5_diff1
value: 5.830725665736509
- type: nauc_ndcg_at_5_max
value: 46.62678021725239
- type: nauc_ndcg_at_5_std
value: 64.28848314363539
- type: nauc_precision_at_1000_diff1
value: -9.666313428844905
- type: nauc_precision_at_1000_max
value: 47.57616298626001
- type: nauc_precision_at_1000_std
value: 49.81803250713608
- type: nauc_precision_at_100_diff1
value: -10.753663329125686
- type: nauc_precision_at_100_max
value: 45.231033820687834
- type: nauc_precision_at_100_std
value: 74.22025319558313
- type: nauc_precision_at_10_diff1
value: -0.9044324563451003
- type: nauc_precision_at_10_max
value: 46.282938258557955
- type: nauc_precision_at_10_std
value: 67.20654075066248
- type: nauc_precision_at_1_diff1
value: 25.34827377347056
- type: nauc_precision_at_1_max
value: 50.58838798996285
- type: nauc_precision_at_1_std
value: 59.36661763433414
- type: nauc_precision_at_20_diff1
value: -5.192190687520166
- type: nauc_precision_at_20_max
value: 39.61181596936397
- type: nauc_precision_at_20_std
value: 65.90673204251821
- type: nauc_precision_at_3_diff1
value: -1.1581585542804733
- type: nauc_precision_at_3_max
value: 48.095238095238116
- type: nauc_precision_at_3_std
value: 57.79976256430543
- type: nauc_precision_at_5_diff1
value: 3.355915932928888
- type: nauc_precision_at_5_max
value: 43.99987410397438
- type: nauc_precision_at_5_std
value: 62.106083138587906
- type: nauc_recall_at_1000_diff1
value: 3.655993902820825
- type: nauc_recall_at_1000_max
value: 28.761919544640335
- type: nauc_recall_at_1000_std
value: 61.94123910402753
- type: nauc_recall_at_100_diff1
value: 2.5155941410242977
- type: nauc_recall_at_100_max
value: 9.499702402437284
- type: nauc_recall_at_100_std
value: 52.57449917231589
- type: nauc_recall_at_10_diff1
value: 5.939411921276368
- type: nauc_recall_at_10_max
value: 4.994244760738587
- type: nauc_recall_at_10_std
value: 33.64383950012248
- type: nauc_recall_at_1_diff1
value: 19.385947791364455
- type: nauc_recall_at_1_max
value: -2.017784609408856
- type: nauc_recall_at_1_std
value: 15.846915472515105
- type: nauc_recall_at_20_diff1
value: 3.339213533105717
- type: nauc_recall_at_20_max
value: 1.4182715611821584
- type: nauc_recall_at_20_std
value: 36.13152761959804
- type: nauc_recall_at_3_diff1
value: 2.9154975009752775
- type: nauc_recall_at_3_max
value: 5.418186566728512
- type: nauc_recall_at_3_std
value: 24.420940449950507
- type: nauc_recall_at_5_diff1
value: 7.4799616256209305
- type: nauc_recall_at_5_max
value: 2.1601588551873823
- type: nauc_recall_at_5_std
value: 28.09415304774757
- type: ndcg_at_1
value: 72.0
- type: ndcg_at_10
value: 58.492
- type: ndcg_at_100
value: 45.437
- type: ndcg_at_1000
value: 44.108999999999995
- type: ndcg_at_20
value: 54.969
- type: ndcg_at_3
value: 64.93900000000001
- type: ndcg_at_5
value: 60.736999999999995
- type: precision_at_1
value: 76.0
- type: precision_at_10
value: 61.199999999999996
- type: precision_at_100
value: 46.839999999999996
- type: precision_at_1000
value: 19.666
- type: precision_at_20
value: 56.8
- type: precision_at_3
value: 68.0
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.20600000000000002
- type: recall_at_10
value: 1.5939999999999999
- type: recall_at_100
value: 11.498
- type: recall_at_1000
value: 42.729
- type: recall_at_20
value: 2.922
- type: recall_at_3
value: 0.5309999999999999
- type: recall_at_5
value: 0.8370000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification (default)
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 85.9521484375
- type: ap
value: 30.374730390938566
- type: ap_weighted
value: 30.374730390938566
- type: f1
value: 70.3917271343218
- type: f1_weighted
value: 88.45609971763992
- type: main_score
value: 85.9521484375
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification (default)
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 80.12733446519525
- type: f1
value: 80.418094849412
- type: f1_weighted
value: 80.10847441279616
- type: main_score
value: 80.12733446519525
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering (default)
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 64.6036121602603
- type: v_measure
value: 64.6036121602603
- type: v_measure_std
value: 1.2991377356017484
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015 (default)
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cosine_accuracy
value: 87.86433808189784
- type: cosine_accuracy_threshold
value: 85.5525255203247
- type: cosine_ap
value: 78.93155350890012
- type: cosine_f1
value: 71.80031864046734
- type: cosine_f1_threshold
value: 83.99585485458374
- type: cosine_precision
value: 72.26082308925709
- type: cosine_recall
value: 71.34564643799473
- type: dot_accuracy
value: 87.86433808189784
- type: dot_accuracy_threshold
value: 85.55253744125366
- type: dot_ap
value: 78.93157147282707
- type: dot_f1
value: 71.80031864046734
- type: dot_f1_threshold
value: 83.99585485458374
- type: dot_precision
value: 72.26082308925709
- type: dot_recall
value: 71.34564643799473
- type: euclidean_accuracy
value: 87.86433808189784
- type: euclidean_accuracy_threshold
value: 53.75403165817261
- type: euclidean_ap
value: 78.93157128337329
- type: euclidean_f1
value: 71.80031864046734
- type: euclidean_f1_threshold
value: 56.575870513916016
- type: euclidean_precision
value: 72.26082308925709
- type: euclidean_recall
value: 71.34564643799473
- type: main_score
value: 79.12654131533807
- type: manhattan_accuracy
value: 87.98950944745782
- type: manhattan_accuracy_threshold
value: 2512.5680923461914
- type: manhattan_ap
value: 79.12654131533807
- type: manhattan_f1
value: 71.90745366110163
- type: manhattan_f1_threshold
value: 2624.722671508789
- type: manhattan_precision
value: 71.65313073094053
- type: manhattan_recall
value: 72.16358839050132
- type: max_accuracy
value: 87.98950944745782
- type: max_ap
value: 79.12654131533807
- type: max_f1
value: 71.90745366110163
- type: max_precision
value: 72.26082308925709
- type: max_recall
value: 72.16358839050132
- type: similarity_accuracy
value: 87.86433808189784
- type: similarity_accuracy_threshold
value: 85.5525255203247
- type: similarity_ap
value: 78.93155350890012
- type: similarity_f1
value: 71.80031864046734
- type: similarity_f1_threshold
value: 83.99585485458374
- type: similarity_precision
value: 72.26082308925709
- type: similarity_recall
value: 71.34564643799473
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus (default)
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cosine_accuracy
value: 89.03248340901153
- type: cosine_accuracy_threshold
value: 84.39068794250488
- type: cosine_ap
value: 85.87150718008797
- type: cosine_f1
value: 78.39147286821706
- type: cosine_f1_threshold
value: 82.88650512695312
- type: cosine_precision
value: 75.96792834440913
- type: cosine_recall
value: 80.97474591931014
- type: dot_accuracy
value: 89.03248340901153
- type: dot_accuracy_threshold
value: 84.39069986343384
- type: dot_ap
value: 85.87150946221163
- type: dot_f1
value: 78.39147286821706
- type: dot_f1_threshold
value: 82.88650512695312
- type: dot_precision
value: 75.96792834440913
- type: dot_recall
value: 80.97474591931014
- type: euclidean_accuracy
value: 89.03248340901153
- type: euclidean_accuracy_threshold
value: 55.873626470565796
- type: euclidean_ap
value: 85.87151445202907
- type: euclidean_f1
value: 78.39147286821706
- type: euclidean_f1_threshold
value: 58.5038423538208
- type: euclidean_precision
value: 75.96792834440913
- type: euclidean_recall
value: 80.97474591931014
- type: main_score
value: 85.95871260636034
- type: manhattan_accuracy
value: 89.09069740365584
- type: manhattan_accuracy_threshold
value: 2603.150749206543
- type: manhattan_ap
value: 85.95871260636034
- type: manhattan_f1
value: 78.53649430651484
- type: manhattan_f1_threshold
value: 2714.5809173583984
- type: manhattan_precision
value: 76.23396390519677
- type: manhattan_recall
value: 80.9824453341546
- type: max_accuracy
value: 89.09069740365584
- type: max_ap
value: 85.95871260636034
- type: max_f1
value: 78.53649430651484
- type: max_precision
value: 76.23396390519677
- type: max_recall
value: 80.9824453341546
- type: similarity_accuracy
value: 89.03248340901153
- type: similarity_accuracy_threshold
value: 84.39068794250488
- type: similarity_ap
value: 85.87150718008797
- type: similarity_f1
value: 78.39147286821706
- type: similarity_f1_threshold
value: 82.88650512695312
- type: similarity_precision
value: 75.96792834440913
- type: similarity_recall
value: 80.97474591931014
task:
type: PairClassification
tags:
- mteb
---
|
ctu-aic/flan-t5-large | ctu-aic | "2023-08-07T14:27:58Z" | 71 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-08-07T14:00:29Z" | This model's tokenizer is extended with Czech (CS), Slovak (SK) and Polish (PL) accented characters using the following code:
````python
from transformers import (
AutoModel,
AutoTokenizer,
)
model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
accents = "áčďéěíňóřšťúůýž" # CS
accents += "ąćęłńóśźż" # PL
accents += "áäčďéíĺľňóôŕšťúýž" # SK
accents += accents.upper()  # include the uppercase variants as well
accents = set(c for c in accents)  # deduplicate characters shared between the three languages
new_tokens = accents - set(tokenizer.vocab.keys())  # keep only characters missing from the existing vocab
tokenizer.add_tokens(list(new_tokens))
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to cover the newly added tokens
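
# Quick sanity check (illustrative only; the Czech sample sentence is an assumption, not from the original card):
sample = "Příliš žluťoučký kůň úpěl ďábelské ódy"
print(tokenizer.tokenize(sample))  # accented characters should now surface as added tokens rather than <unk>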
```` |
LarryAIDraw/iono_pokemon | LarryAIDraw | "2023-12-09T15:56:23Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-09T15:47:24Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/226468/iono-nanjyamo-pokemon-or-goofy-ai |
shrenikb/abla3 | shrenikb | "2024-06-10T00:35:59Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:shrenikb/sparsegpt75sparsitymodel",
"base_model:adapter:shrenikb/sparsegpt75sparsitymodel",
"region:us"
] | null | "2024-06-10T00:07:52Z" | ---
library_name: peft
base_model: shrenikb/sparsegpt75sparsitymodel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
shopitalic/elliot-navy-cashmere-short-sleeve-polo-rafael | shopitalic | "2025-03-04T17:07:40Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-04T17:07:31Z" | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# elliot navy cashmere short sleeve polo rafael
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shopitalic/elliot-navy-cashmere-short-sleeve-polo-rafael/tree/main) them in the Files & versions tab.
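A minimal `diffusers` sketch for applying this LoRA on top of the base model (the prompt and generation settings are assumptions, and the card lists no trigger word):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and apply this LoRA; the prompt below is illustrative.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("shopitalic/elliot-navy-cashmere-short-sleeve-polo-rafael")
pipe.to("cuda")
image = pipe("a navy cashmere short sleeve polo shirt, studio lighting", num_inference_steps=28).images[0]
image.save("polo.png")
```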
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
hgnoi/ygsPT9KCCH3gbiy0 | hgnoi | "2024-05-22T13:44:30Z" | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-22T13:42:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/roberta-base_ebola_gpt4o_1_2e-5_16_weight | isspek | "2025-03-23T14:31:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-23T14:31:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dynapp/lora_model_cloud_tuned_combined | dynapp | "2025-02-01T20:30:39Z" | 27 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-01T17:33:45Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dynapp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF | mradermacher | "2025-04-02T10:54:37Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-02T10:35:06Z" | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
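As a quick sketch of the basic case (assumptions: `llama-cpp-python` and `huggingface_hub` are installed, the Q4_K_M file from the table below is used, and the prompt and context size are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed below and run a short text-only generation.
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF",
    filename="Qwen2.5-VL-3B-Instruct.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Say hello in one short sentence.", max_tokens=32)["choices"][0]["text"])
```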
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-3B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-3B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rororo1010/Test_tyatyaa | rororo1010 | "2023-10-02T09:42:48Z" | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-22T21:47:37Z" | ---
license: creativeml-openrail-m
---
|
mradermacher/Alpacino30b-GGUF | mradermacher | "2024-05-06T05:12:21Z" | 71 | 0 | transformers | [
"transformers",
"gguf",
"alpaca",
"en",
"base_model:digitous/Alpacino30b",
"base_model:quantized:digitous/Alpacino30b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T11:56:58Z" | ---
base_model: digitous/Alpacino30b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- alpaca
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/digitous/Alpacino30b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alpacino30b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
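Joining a multi-part quant amounts to concatenating the parts in order; a minimal Python sketch (the part file names here are assumptions for illustration, not files actually present in this repo):
```python
import shutil

# Assumed part names following the usual *.partXofY scheme; adjust to the real file names.
parts = ["Alpacino30b.Q4_K_M.gguf.part1of2", "Alpacino30b.Q4_K_M.gguf.part2of2"]
with open("Alpacino30b.Q4_K_M.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, joined)  # append each part verbatim
```
The resulting single `.gguf` file can then be loaded by any llama.cpp-based runtime as usual.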
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino30b-GGUF/resolve/main/Alpacino30b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ishaq101/llama3-8b-finetune-4bit-lora | ishaq101 | "2024-10-07T11:00:24Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:ishaq101/llama3-8b-finetune-4bit",
"base_model:quantized:ishaq101/llama3-8b-finetune-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-10-07T10:58:53Z" | ---
base_model: ishaq101/llama3-8b-finetune-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ishaq101
- **License:** apache-2.0
- **Finetuned from model :** ishaq101/llama3-8b-finetune-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NiharGupte/resnet-50-finetuned-student_two_classes | NiharGupte | "2024-05-04T07:11:35Z" | 210 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-04T07:06:25Z" | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-student_two_classes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-student_two_classes
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4531
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
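For orientation, these settings correspond roughly to the following 🤗 Transformers `TrainingArguments` (a sketch only, not the original training script; the output directory is an assumption):
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters reported above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="resnet-50-finetuned-student_two_classes",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```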
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5955 | 1.0 | 13 | 0.4665 | 0.85 |
| 0.5303 | 2.0 | 26 | 0.4790 | 0.85 |
| 0.6127 | 3.0 | 39 | 0.4787 | 0.85 |
| 0.5025 | 4.0 | 52 | 0.4547 | 0.85 |
| 0.471 | 5.0 | 65 | 0.4621 | 0.85 |
| 0.4673 | 6.0 | 78 | 0.4775 | 0.86 |
| 0.4492 | 7.0 | 91 | 0.4648 | 0.86 |
| 0.4144 | 8.0 | 104 | 0.4733 | 0.85 |
| 0.4963 | 9.0 | 117 | 0.4575 | 0.85 |
| 0.4149 | 10.0 | 130 | 0.4691 | 0.85 |
| 0.4588 | 11.0 | 143 | 0.4596 | 0.84 |
| 0.3995 | 12.0 | 156 | 0.4754 | 0.85 |
| 0.359 | 13.0 | 169 | 0.4616 | 0.85 |
| 0.4246 | 14.0 | 182 | 0.4552 | 0.85 |
| 0.4001 | 15.0 | 195 | 0.4839 | 0.85 |
| 0.3919 | 16.0 | 208 | 0.4708 | 0.85 |
| 0.4137 | 17.0 | 221 | 0.4416 | 0.85 |
| 0.3912 | 18.0 | 234 | 0.4507 | 0.85 |
| 0.4322 | 19.0 | 247 | 0.4237 | 0.85 |
| 0.4043 | 20.0 | 260 | 0.4531 | 0.85 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
texanrangee/59c70021-8f2f-4e30-8998-b0f6230af9b4 | texanrangee | "2025-03-16T22:08:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-16T16:26:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lier007/xiaobu-embedding-v2 | lier007 | "2025-01-03T10:07:40Z" | 1,842 | 49 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"arxiv:2002.10857",
"model-index",
"region:us"
] | null | "2024-06-30T13:01:04Z" | ---
tags:
- mteb
model-index:
- name: piccolo-embedding_mixed2
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 56.918538280469875
- type: cos_sim_spearman
value: 60.95597435855258
- type: euclidean_pearson
value: 59.73821610051437
- type: euclidean_spearman
value: 60.956778530262454
- type: manhattan_pearson
value: 59.739675774225475
- type: manhattan_spearman
value: 60.95243600302903
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 56.79417977023184
- type: cos_sim_spearman
value: 58.80984726256814
- type: euclidean_pearson
value: 63.42225182281334
- type: euclidean_spearman
value: 58.80957930593542
- type: manhattan_pearson
value: 63.41128425333986
- type: manhattan_spearman
value: 58.80784321716389
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.074000000000005
- type: f1
value: 47.11468271375511
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 73.3412976021806
- type: cos_sim_spearman
value: 75.0799965464816
- type: euclidean_pearson
value: 73.7874729086686
- type: euclidean_spearman
value: 75.07910973646369
- type: manhattan_pearson
value: 73.7716616949607
- type: manhattan_spearman
value: 75.06089549008017
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 60.4206935177474
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 49.53654617222264
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 90.96386786978509
- type: mrr
value: 92.8897619047619
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 90.41014127763198
- type: mrr
value: 92.45039682539682
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.901999999999997
- type: map_at_10
value: 40.321
- type: map_at_100
value: 42.176
- type: map_at_1000
value: 42.282
- type: map_at_3
value: 35.882
- type: map_at_5
value: 38.433
- type: mrr_at_1
value: 40.910000000000004
- type: mrr_at_10
value: 49.309999999999995
- type: mrr_at_100
value: 50.239
- type: mrr_at_1000
value: 50.278
- type: mrr_at_3
value: 46.803
- type: mrr_at_5
value: 48.137
- type: ndcg_at_1
value: 40.785
- type: ndcg_at_10
value: 47.14
- type: ndcg_at_100
value: 54.156000000000006
- type: ndcg_at_1000
value: 55.913999999999994
- type: ndcg_at_3
value: 41.669
- type: ndcg_at_5
value: 43.99
- type: precision_at_1
value: 40.785
- type: precision_at_10
value: 10.493
- type: precision_at_100
value: 1.616
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.723
- type: precision_at_5
value: 17.249
- type: recall_at_1
value: 26.901999999999997
- type: recall_at_10
value: 58.25
- type: recall_at_100
value: 87.10900000000001
- type: recall_at_1000
value: 98.804
- type: recall_at_3
value: 41.804
- type: recall_at_5
value: 48.884
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.42212868310283
- type: cos_sim_ap
value: 92.83788702972741
- type: cos_sim_f1
value: 87.08912233141307
- type: cos_sim_precision
value: 84.24388111888112
- type: cos_sim_recall
value: 90.13327098433481
- type: dot_accuracy
value: 86.44618159951895
- type: dot_ap
value: 92.81146275060858
- type: dot_f1
value: 87.06857911250562
- type: dot_precision
value: 83.60232408005164
- type: dot_recall
value: 90.83469721767594
- type: euclidean_accuracy
value: 86.42212868310283
- type: euclidean_ap
value: 92.83805700492603
- type: euclidean_f1
value: 87.08803611738148
- type: euclidean_precision
value: 84.18066768492254
- type: euclidean_recall
value: 90.20341360766892
- type: manhattan_accuracy
value: 86.28983764281419
- type: manhattan_ap
value: 92.82818970981005
- type: manhattan_f1
value: 87.12625521832335
- type: manhattan_precision
value: 84.19101613606628
- type: manhattan_recall
value: 90.27355623100304
- type: max_accuracy
value: 86.44618159951895
- type: max_ap
value: 92.83805700492603
- type: max_f1
value: 87.12625521832335
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 79.215
- type: map_at_10
value: 86.516
- type: map_at_100
value: 86.6
- type: map_at_1000
value: 86.602
- type: map_at_3
value: 85.52
- type: map_at_5
value: 86.136
- type: mrr_at_1
value: 79.663
- type: mrr_at_10
value: 86.541
- type: mrr_at_100
value: 86.625
- type: mrr_at_1000
value: 86.627
- type: mrr_at_3
value: 85.564
- type: mrr_at_5
value: 86.15899999999999
- type: ndcg_at_1
value: 79.663
- type: ndcg_at_10
value: 89.399
- type: ndcg_at_100
value: 89.727
- type: ndcg_at_1000
value: 89.781
- type: ndcg_at_3
value: 87.402
- type: ndcg_at_5
value: 88.479
- type: precision_at_1
value: 79.663
- type: precision_at_10
value: 9.926
- type: precision_at_100
value: 1.006
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 31.226
- type: precision_at_5
value: 19.283
- type: recall_at_1
value: 79.215
- type: recall_at_10
value: 98.209
- type: recall_at_100
value: 99.579
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 92.703
- type: recall_at_5
value: 95.364
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 82.82000000000001
- type: map_at_100
value: 85.5
- type: map_at_1000
value: 85.533
- type: map_at_3
value: 57.802
- type: map_at_5
value: 72.82600000000001
- type: mrr_at_1
value: 92.80000000000001
- type: mrr_at_10
value: 94.83500000000001
- type: mrr_at_100
value: 94.883
- type: mrr_at_1000
value: 94.884
- type: mrr_at_3
value: 94.542
- type: mrr_at_5
value: 94.729
- type: ndcg_at_1
value: 92.7
- type: ndcg_at_10
value: 89.435
- type: ndcg_at_100
value: 91.78699999999999
- type: ndcg_at_1000
value: 92.083
- type: ndcg_at_3
value: 88.595
- type: ndcg_at_5
value: 87.53
- type: precision_at_1
value: 92.7
- type: precision_at_10
value: 42.4
- type: precision_at_100
value: 4.823
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 79.133
- type: precision_at_5
value: 66.8
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 90.069
- type: recall_at_100
value: 97.875
- type: recall_at_1000
value: 99.436
- type: recall_at_3
value: 59.367999999999995
- type: recall_at_5
value: 76.537
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 54.800000000000004
- type: map_at_10
value: 65.289
- type: map_at_100
value: 65.845
- type: map_at_1000
value: 65.853
- type: map_at_3
value: 62.766999999999996
- type: map_at_5
value: 64.252
- type: mrr_at_1
value: 54.800000000000004
- type: mrr_at_10
value: 65.255
- type: mrr_at_100
value: 65.81700000000001
- type: mrr_at_1000
value: 65.824
- type: mrr_at_3
value: 62.683
- type: mrr_at_5
value: 64.248
- type: ndcg_at_1
value: 54.800000000000004
- type: ndcg_at_10
value: 70.498
- type: ndcg_at_100
value: 72.82300000000001
- type: ndcg_at_1000
value: 73.053
- type: ndcg_at_3
value: 65.321
- type: ndcg_at_5
value: 67.998
- type: precision_at_1
value: 54.800000000000004
- type: precision_at_10
value: 8.690000000000001
- type: precision_at_100
value: 0.97
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 24.233
- type: precision_at_5
value: 15.840000000000002
- type: recall_at_1
value: 54.800000000000004
- type: recall_at_10
value: 86.9
- type: recall_at_100
value: 97
- type: recall_at_1000
value: 98.9
- type: recall_at_3
value: 72.7
- type: recall_at_5
value: 79.2
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.758368603308966
- type: f1
value: 40.249503783871596
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.08067542213884
- type: ap
value: 60.31281895139249
- type: f1
value: 84.20883153932607
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 74.04193577551248
- type: cos_sim_spearman
value: 79.81875884845549
- type: euclidean_pearson
value: 80.02581187503708
- type: euclidean_spearman
value: 79.81877215060574
- type: manhattan_pearson
value: 80.01767830530258
- type: manhattan_spearman
value: 79.81178852172727
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 39.90939429947956
- type: mrr
value: 39.71071428571429
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 68.485
- type: map_at_10
value: 78.27199999999999
- type: map_at_100
value: 78.54100000000001
- type: map_at_1000
value: 78.546
- type: map_at_3
value: 76.339
- type: map_at_5
value: 77.61099999999999
- type: mrr_at_1
value: 70.80199999999999
- type: mrr_at_10
value: 78.901
- type: mrr_at_100
value: 79.12400000000001
- type: mrr_at_1000
value: 79.128
- type: mrr_at_3
value: 77.237
- type: mrr_at_5
value: 78.323
- type: ndcg_at_1
value: 70.759
- type: ndcg_at_10
value: 82.191
- type: ndcg_at_100
value: 83.295
- type: ndcg_at_1000
value: 83.434
- type: ndcg_at_3
value: 78.57600000000001
- type: ndcg_at_5
value: 80.715
- type: precision_at_1
value: 70.759
- type: precision_at_10
value: 9.951
- type: precision_at_100
value: 1.049
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.660999999999998
- type: precision_at_5
value: 18.94
- type: recall_at_1
value: 68.485
- type: recall_at_10
value: 93.65
- type: recall_at_100
value: 98.434
- type: recall_at_1000
value: 99.522
- type: recall_at_3
value: 84.20100000000001
- type: recall_at_5
value: 89.261
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.45460659045055
- type: f1
value: 73.84987702455533
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.29926025554808
- type: f1
value: 84.40636286569843
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 57.599999999999994
- type: map_at_10
value: 64.691
- type: map_at_100
value: 65.237
- type: map_at_1000
value: 65.27
- type: map_at_3
value: 62.733000000000004
- type: map_at_5
value: 63.968
- type: mrr_at_1
value: 58.099999999999994
- type: mrr_at_10
value: 64.952
- type: mrr_at_100
value: 65.513
- type: mrr_at_1000
value: 65.548
- type: mrr_at_3
value: 63
- type: mrr_at_5
value: 64.235
- type: ndcg_at_1
value: 57.599999999999994
- type: ndcg_at_10
value: 68.19
- type: ndcg_at_100
value: 70.98400000000001
- type: ndcg_at_1000
value: 71.811
- type: ndcg_at_3
value: 64.276
- type: ndcg_at_5
value: 66.47999999999999
- type: precision_at_1
value: 57.599999999999994
- type: precision_at_10
value: 7.920000000000001
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 22.900000000000002
- type: precision_at_5
value: 14.799999999999999
- type: recall_at_1
value: 57.599999999999994
- type: recall_at_10
value: 79.2
- type: recall_at_100
value: 92.60000000000001
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 68.7
- type: recall_at_5
value: 74
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 79.45
- type: f1
value: 79.25610578280538
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 85.43584190579317
- type: cos_sim_ap
value: 90.89979725191012
- type: cos_sim_f1
value: 86.48383937316358
- type: cos_sim_precision
value: 80.6392694063927
- type: cos_sim_recall
value: 93.24181626187962
- type: dot_accuracy
value: 85.38170005414185
- type: dot_ap
value: 90.87532457866699
- type: dot_f1
value: 86.48383937316358
- type: dot_precision
value: 80.6392694063927
- type: dot_recall
value: 93.24181626187962
- type: euclidean_accuracy
value: 85.43584190579317
- type: euclidean_ap
value: 90.90126652086121
- type: euclidean_f1
value: 86.48383937316358
- type: euclidean_precision
value: 80.6392694063927
- type: euclidean_recall
value: 93.24181626187962
- type: manhattan_accuracy
value: 85.43584190579317
- type: manhattan_ap
value: 90.87896997853466
- type: manhattan_f1
value: 86.47581441263573
- type: manhattan_precision
value: 81.18628359592215
- type: manhattan_recall
value: 92.5026399155227
- type: max_accuracy
value: 85.43584190579317
- type: max_ap
value: 90.90126652086121
- type: max_f1
value: 86.48383937316358
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 94.9
- type: ap
value: 93.1468223150745
- type: f1
value: 94.88918689508299
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 40.4831743182905
- type: cos_sim_spearman
value: 47.4163675550491
- type: euclidean_pearson
value: 46.456319899274924
- type: euclidean_spearman
value: 47.41567079730661
- type: manhattan_pearson
value: 46.48561639930895
- type: manhattan_spearman
value: 47.447721653461215
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 42.96423587663398
- type: cos_sim_spearman
value: 45.13742225167858
- type: euclidean_pearson
value: 39.275452114075435
- type: euclidean_spearman
value: 45.137763540967406
- type: manhattan_pearson
value: 39.24797626417764
- type: manhattan_spearman
value: 45.13817773119268
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.26687809086202
- type: cos_sim_spearman
value: 66.9569145816897
- type: euclidean_pearson
value: 65.72390780809788
- type: euclidean_spearman
value: 66.95406938095539
- type: manhattan_pearson
value: 65.6220809000381
- type: manhattan_spearman
value: 66.88531036320953
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 80.30831700726195
- type: cos_sim_spearman
value: 82.05184068558792
- type: euclidean_pearson
value: 81.73198597791563
- type: euclidean_spearman
value: 82.05326103582206
- type: manhattan_pearson
value: 81.70886400949136
- type: manhattan_spearman
value: 82.03473274756037
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 69.03398835347575
- type: mrr
value: 79.9212528613341
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.515
- type: map_at_10
value: 77.40599999999999
- type: map_at_100
value: 81.087
- type: map_at_1000
value: 81.148
- type: map_at_3
value: 54.327000000000005
- type: map_at_5
value: 66.813
- type: mrr_at_1
value: 89.764
- type: mrr_at_10
value: 92.58
- type: mrr_at_100
value: 92.663
- type: mrr_at_1000
value: 92.666
- type: mrr_at_3
value: 92.15299999999999
- type: mrr_at_5
value: 92.431
- type: ndcg_at_1
value: 89.777
- type: ndcg_at_10
value: 85.013
- type: ndcg_at_100
value: 88.62100000000001
- type: ndcg_at_1000
value: 89.184
- type: ndcg_at_3
value: 86.19200000000001
- type: ndcg_at_5
value: 84.909
- type: precision_at_1
value: 89.777
- type: precision_at_10
value: 42.218
- type: precision_at_100
value: 5.032
- type: precision_at_1000
value: 0.517
- type: precision_at_3
value: 75.335
- type: precision_at_5
value: 63.199000000000005
- type: recall_at_1
value: 27.515
- type: recall_at_10
value: 84.258
- type: recall_at_100
value: 95.908
- type: recall_at_1000
value: 98.709
- type: recall_at_3
value: 56.189
- type: recall_at_5
value: 70.50800000000001
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 54.635999999999996
- type: f1
value: 52.63073912739558
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 78.75676284855221
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 71.95583733802839
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.9
- type: map_at_10
value: 75.622
- type: map_at_100
value: 75.93900000000001
- type: map_at_1000
value: 75.93900000000001
- type: map_at_3
value: 73.933
- type: map_at_5
value: 74.973
- type: mrr_at_1
value: 65
- type: mrr_at_10
value: 75.676
- type: mrr_at_100
value: 75.994
- type: mrr_at_1000
value: 75.994
- type: mrr_at_3
value: 74.05000000000001
- type: mrr_at_5
value: 75.03999999999999
- type: ndcg_at_1
value: 64.9
- type: ndcg_at_10
value: 80.08999999999999
- type: ndcg_at_100
value: 81.44500000000001
- type: ndcg_at_1000
value: 81.45599999999999
- type: ndcg_at_3
value: 76.688
- type: ndcg_at_5
value: 78.53
- type: precision_at_1
value: 64.9
- type: precision_at_10
value: 9.379999999999999
- type: precision_at_100
value: 0.997
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 28.199999999999996
- type: precision_at_5
value: 17.8
- type: recall_at_1
value: 64.9
- type: recall_at_10
value: 93.8
- type: recall_at_100
value: 99.7
- type: recall_at_1000
value: 99.8
- type: recall_at_3
value: 84.6
- type: recall_at_5
value: 89
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.34
- type: ap
value: 75.20638024616892
- type: f1
value: 87.88648489072128
library_name: sentence-transformers
---
# xiaobu-embedding-v2
Built on piccolo-embedding[1], with the following main changes:
- The synthetic data is replaced with the data accumulated for xiaobu-embedding-v1[2].
- The six CMTEB task types are handled uniformly from the circle_loss[3] perspective (see the sketch below); the biggest advantage is that multiple positives in the original datasets can be fully exploited, and it also largely avoids having to weight several different losses against each other.
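A rough illustration of such a unified multi-positive objective, in the spirit of the circle-loss paper's unified formula. This is not the released training code; the function name and the `gamma`/`margin` values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def unified_multi_positive_loss(sim_pos, sim_neg, gamma=32.0, margin=0.25):
    """log(1 + sum_j exp(gamma*(s_n_j + margin)) * sum_i exp(-gamma*s_p_i)).

    sim_pos: similarities between a query and all of its positives, shape (K,)
    sim_neg: similarities between the query and its negatives, shape (L,)
    Every positive contributes to the loss, which is the point made above.
    """
    neg_term = torch.logsumexp(gamma * (sim_neg + margin), dim=0)
    pos_term = torch.logsumexp(-gamma * sim_pos, dim=0)
    return F.softplus(neg_term + pos_term)  # softplus(a) == log(1 + exp(a))
```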
## Usage (Sentence-Transformers)
```
pip install -U sentence-transformers
```
Similarity computation:
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('lier007/xiaobu-embedding-v2')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
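The resulting similarity matrix can be used directly for retrieval-style ranking. A small follow-up illustration, reusing the `model` object from the snippet above (the example strings are made up):

```python
import numpy as np

query_emb = model.encode(["query text"], normalize_embeddings=True)
doc_emb = model.encode(["a relevant passage", "an unrelated passage"], normalize_embeddings=True)

scores = (query_emb @ doc_emb.T)[0]   # cosine similarities, since embeddings are normalized
order = np.argsort(-scores)           # document indices, best match first
print(order, scores[order])
```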
## Reference
1. https://github.com/hjq133/piccolo-embedding
2. https://huggingface.co/lier007/xiaobu-embedding
3. https://arxiv.org/abs/2002.10857 |
dx2102/llama-midi | dx2102 | "2025-04-05T00:28:12Z" | 217 | 4 | null | [
"safetensors",
"llama",
"dataset:amaai-lab/MidiCaps",
"dataset:projectlosangeles/Los-Angeles-MIDI-Dataset",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | "2025-02-11T05:13:51Z" | ---
datasets:
- amaai-lab/MidiCaps
- projectlosangeles/Los-Angeles-MIDI-Dataset
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
### Write music scores with llama
### Try the model online: https://huggingface.co/spaces/dx2102/llama-midi
This model is finetuned from the `Llama-3.2-1B` language model.
It learns to write MIDI music scores with a text representation.
Optionally, the score title can also be used as a text prompt.
To use this model, you can simply take existing code and replace `meta-llama/Llama-3.2-1B` with `dx2102/llama-midi`.
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="dx2102/llama-midi",
torch_dtype=torch.bfloat16,
device="cuda", # cuda/mps/cpu
)
txt = pipe(
'''
Bach
pitch duration wait velocity instrument
'''.strip(),
max_length=100,
temperature=1.0,
top_p=1.0,
)
txt = txt[0]["generated_text"]  # the pipeline returns a list of dicts
print(txt)
```
To convert the text representation back to a midi file, try this:
```bash
# install this midi library
pip install symusic
```
```python
import symusic
# For example
txt = '''pitch duration wait velocity instrument
71 1310 0 20 0
48 330 350 20 0
55 330 350 20 0
64 1310 690 20 0
74 660 690 20 0
69 1310 0 20 0
48 330 350 20 0
57 330 350 20 0
66 1310 690 20 0
67 330 350 20 0
69 330 350 20 0
71 1310 0 20 0
48 330 350 20 0
55 330 350 20 0
64 1310 690 20 0
74 660 690 20 0
69 1970 0 20 0
48 330 350 20 0
'''
def postprocess(txt, path):
# assert txt.startswith(prompt)
txt = txt.split('\n\n')[-1]
tracks = {}
now = 0
    # ignore any model output line that does not parse as a note
    # (e.g. the prompt header or a truncated final line)
    for line in txt.split('\n'):
        try:
            pitch, duration, wait, velocity, instrument = line.split()
            pitch, duration, wait, velocity = [int(x) for x in [pitch, duration, wait, velocity]]
        except ValueError as e:
            print('Postprocess: Ignored line:', repr(line), e)
            continue
        if instrument not in tracks:
            tracks[instrument] = symusic.core.TrackSecond()
            if instrument != 'drum':
                tracks[instrument].program = int(instrument)
            else:
                tracks[instrument].is_drum = True
        # Eg. Note(time=7.47, duration=5.25, pitch=43, velocity=64, ttype='Second')
        tracks[instrument].notes.append(symusic.core.NoteSecond(
            time=now/1000,
            duration=duration/1000,
            pitch=int(pitch),
            velocity=int(velocity * 4),
        ))
        now += wait
print(f'Postprocess: Got {sum(len(track.notes) for track in tracks.values())} notes')
try:
score = symusic.Score(ttype='Second')
score.tracks.extend(tracks.values())
score.dump_midi(path)
except Exception as e:
print('Postprocess: Ignored postprocessing error:', e)
postprocess(txt, './result.mid')
```
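Putting the two snippets together — generate with the pipeline, then convert the text to MIDI. A sketch only: the title in the prompt and the token budget are just examples.

```python
prompt = "Für Elise\npitch duration wait velocity instrument\n"
generated = pipe(prompt, max_new_tokens=500)[0]["generated_text"]
postprocess(generated, "./generated.mid")
```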
|
lilly9928/LogicLLM | lilly9928 | "2024-06-04T06:24:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T06:23:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dima806/67_cat_breeds_image_detection | dima806 | "2024-10-19T10:54:29Z" | 228 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-10T17:40:42Z" | ---
license: apache-2.0
metrics:
- accuracy
base_model:
- google/vit-base-patch16-224-in21k
---
See https://www.kaggle.com/code/dima806/67-cat-breed-image-detection-vit for more details. |
Scalino84/my-flux-face-v2 | Scalino84 | "2024-12-30T13:38:20Z" | 30 | 0 | diffusers | [
"diffusers",
"stable-diffusion-lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-12-30T13:28:39Z" | ---
language: en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
license: other # or "unknown" if you are not sure
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: xyz person
---
# My Flux Face LoRA Model
This is a custom LoRA model trained for face generation using Stable Diffusion.
## Model Details
- Base Model: runwayml/stable-diffusion-v1-5
- Training Type: LoRA
- Resolution: 512x512
- Trigger Word: "xyz person"
## License
This model is for non-commercial use only.
## Usage
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Scalino84/my-flux-face-v2"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights(model_id)
prompt = "a photo of xyz person, professional headshot"
image = pipe(prompt).images[0]
```
|
pphildan/vit-base-patch16-224-v22 | pphildan | "2023-05-20T09:19:48Z" | 216 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-20T08:57:02Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-v22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-v22
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0399
- Accuracy: 0.9856
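For reference, a minimal inference sketch using the plain `transformers` classes (the image path is a placeholder; any RGB image works):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "pphildan/vit-base-patch16-224-v22"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```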
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.282 | 1.0 | 190 | 0.1316 | 0.9530 |
| 0.1866 | 2.0 | 380 | 0.1104 | 0.9644 |
| 0.1409 | 3.0 | 570 | 0.0662 | 0.9781 |
| 0.1085 | 4.0 | 760 | 0.0515 | 0.9826 |
| 0.0655 | 5.0 | 950 | 0.0399 | 0.9856 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
Nexspear/7ec63d8b-3416-4ce9-817c-d3af13bec894 | Nexspear | "2025-02-02T18:16:43Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | "2025-02-02T18:03:12Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ec63d8b-3416-4ce9-817c-d3af13bec894
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74aeb5c63551d548_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74aeb5c63551d548_train_data.json
type:
field_input: ''
field_instruction: name
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/7ec63d8b-3416-4ce9-817c-d3af13bec894
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/74aeb5c63551d548_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 05d6a97d-a07b-424f-b289-aea33906250f
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 05d6a97d-a07b-424f-b289-aea33906250f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7ec63d8b-3416-4ce9-817c-d3af13bec894
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7226
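Since this repository stores a PEFT LoRA adapter for the base model above, it can be loaded roughly as follows. A sketch under that assumption, not tested against this exact checkpoint:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Llama-2-7b-64k"
adapter_id = "Nexspear/7ec63d8b-3416-4ce9-817c-d3af13bec894"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```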
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9002 | 0.0061 | 1 | 1.8614 |
| 3.1135 | 0.3030 | 50 | 0.7905 |
| 2.8538 | 0.6061 | 100 | 0.7226 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |