modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-28 18:26:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 477 classes) | tags (sequence, 1 to 4.05k entries) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-28 18:24:32) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
mikankure/gensyn-checkpoints-whistling_howling_scorpion | mikankure | 2025-05-28T17:42:42Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whistling howling scorpion",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-17T02:03:41Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-whistling_howling_scorpion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whistling howling scorpion
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-whistling_howling_scorpion
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mikankure/gensyn-checkpoints-whistling_howling_scorpion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
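For orientation, a minimal GRPO run with TRL looks roughly like the sketch below; the dataset and reward function are illustrative placeholders, not the RL-swarm setup actually used to produce this checkpoint.
```python
# Hypothetical GRPO sketch with TRL; reward and dataset are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-sketch", num_generations=4),
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)
trainer.train()
```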
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/MathReasoning_SFT_V1.3-GGUF | mradermacher | 2025-05-28T16:11:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:guydebruyn/MathReasoning_SFT_V1.3",
"base_model:quantized:guydebruyn/MathReasoning_SFT_V1.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T16:07:48Z | ---
base_model: guydebruyn/MathReasoning_SFT_V1.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/guydebruyn/MathReasoning_SFT_V1.3
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
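As a concrete sketch (a hypothetical setup assuming the `llama-cpp-python` package; the quant and context size are arbitrary picks), one of the single-file quants from the table below can be fetched and run like this:
```python
# Hedged sketch: download one quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/MathReasoning_SFT_V1.3-GGUF",
    filename="MathReasoning_SFT_V1.3.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: What is 17 * 24?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```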
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MathReasoning_SFT_V1.3-GGUF/resolve/main/MathReasoning_SFT_V1.3.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer | wking669 | 2025-05-28T16:07:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fluffy arctic reindeer",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T18:09:38Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fluffy arctic reindeer
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mjs227/llama-rw-sft | mjs227 | 2025-05-28T14:52:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T13:54:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
icefog72/Ice0.122-28.05-RP-4.2bpw | icefog72 | 2025-05-28T14:51:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2025-05-28T14:39:43Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.122-28.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method, using F:\FModels\Ice0.121-28.05-RP as the base.
### Models Merged
The following models were included in the merge:
* H:\FModels\Ice0.107-22.04-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: F:\FModels\Ice0.121-28.05-RP
- model: H:\FModels\Ice0.107-22.04-RP
merge_method: nearswap
base_model: F:\FModels\Ice0.121-28.05-RP
parameters:
t: 0.5
dtype: bfloat16
chat_template: "alpaca"
```
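For reference, a config like this can be executed with mergekit's `mergekit-yaml` CLI or from Python. Below is a hedged sketch using mergekit's documented Python entry point; the config file and output paths are placeholders:
```python
# Hedged sketch: run the YAML config above via mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", encoding="utf-8") as fp:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Ice0.122-28.05-RP",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```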
|
Diamantis99/K42SR7Z | Diamantis99 | 2025-05-28T14:48:08Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-05-28T14:47:50Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnext101_32x8d",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
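Given these parameters (a single binary class, `activation=None`), the model returns raw logits. A minimal inference sketch with a dummy input follows; the 512x512 size and 0.5 threshold are arbitrary choices:
```python
import torch

# Dummy input; H and W should be divisible by the encoder output stride (16).
model.eval()
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)               # shape (1, 1, 512, 512)
mask = torch.sigmoid(logits) > 0.5  # arbitrary threshold
```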
## Model metrics
```json
[
{
"test_per_image_iou": 0.8062993884086609,
"test_dataset_iou": 0.840954065322876
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
Diamantis99/ZsQOfEy | Diamantis99 | 2025-05-28T14:37:53Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-05-28T14:37:39Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet152",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8171602487564087,
"test_dataset_iou": 0.8524048924446106
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
Diamantis99/DiYf4Gu | Diamantis99 | 2025-05-28T14:22:52Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-05-28T14:22:32Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-efficientnet-b8",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7539514899253845,
"test_dataset_iou": 0.7939558029174805
}
]
```
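The two numbers differ in aggregation: per-image IoU averages the IoU computed on each image, while dataset IoU pools confusion-matrix counts over the whole test set. A hedged sketch of how both could be computed with `smp.metrics` (the predictions and targets below are random placeholders):
```python
import torch
import segmentation_models_pytorch as smp

pred = torch.rand(8, 1, 256, 256)               # placeholder sigmoid outputs
target = torch.randint(0, 2, (8, 1, 256, 256))  # placeholder binary masks

tp, fp, fn, tn = smp.metrics.get_stats(pred, target, mode="binary", threshold=0.5)
per_image_iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro-imagewise")
dataset_iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro")
```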
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
phospho-app/OpenLabBA-gr00t-sugar_in_cup_V4-gl60uoo1tc | phospho-app | 2025-05-28T14:15:15Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-28T12:53:16Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [OpenLabBA/sugar_in_cup_V4](https://huggingface.co/datasets/OpenLabBA/sugar_in_cup_V4)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Diamantis99/6qeGHLh | Diamantis99 | 2025-05-28T13:36:53Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-05-28T13:36:30Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "mit_b5",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8480672836303711,
"test_dataset_iou": 0.8878631591796875
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
lusxvr/nanoVLM | lusxvr | 2025-05-28T13:34:58Z | 2 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-23T15:49:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("lusxvr/nanoVLM")
```
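As a quick sanity check on the parameter count quoted above (assuming the loaded object is a standard `torch.nn.Module`, which nanoVLM's pure-PyTorch design implies):
```python
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")  # ~222M per the description above
```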
|
vidyc/coig_model | vidyc | 2025-05-28T13:29:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T13:28:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jagrati/cve_detail_prediction_model | Jagrati | 2025-05-28T13:23:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T13:22:32Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jagrati
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
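For inference, the checkpoint can presumably be loaded like any other 4-bit transformers model. A hedged sketch (assumes a CUDA GPU and the `bitsandbytes` package; the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Jagrati/cve_detail_prediction_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Describe CVE-2021-44228 in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```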
|
Alibaba-NLP/gte-Qwen2-1.5B-instruct | Alibaba-NLP | 2025-05-28T13:11:05Z | 163,546 | 216 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"custom_code",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-06-29T08:02:40Z | ---
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
license: apache-2.0
model-index:
- name: gte-qwen2-7B-instruct
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 50.511868162026175
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 45.007803189284004
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 43.20754608934859
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 38.818037697335505
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 39.386760057101945
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 37.89687154075537
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
split: test
type: mteb/mind_small
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: None
split: test
type: mteb/quora
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 55.82153952668092
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 282350215ef01743dc01b456c7f5241fa8937f16
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 62.094465801879295
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: None
split: test
type: mteb/scidocs
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
split: test
type: mteb/sickr-sts
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 67.65446577183913
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 46.30749237193961
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: None
split: test
type: mteb/trec-covid
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 49.581627240203474
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
task:
type: PairClassification
- dataset:
config: default
name: MTEB AFQMC
revision: b44c3b011063adb25877c13823db83bb193913c4
split: validation
type: C-MTEB/AFQMC
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
task:
type: STS
- dataset:
config: default
name: MTEB ATEC
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
split: test
type: C-MTEB/ATEC
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
task:
type: STS
- dataset:
config: zh
name: MTEB AmazonReviewsClassification (zh)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
task:
type: Classification
- dataset:
config: default
name: MTEB BQ
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
split: test
type: C-MTEB/BQ
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
task:
type: STS
- dataset:
config: default
name: MTEB CLSClusteringP2P
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
split: test
type: C-MTEB/CLSClusteringP2P
metrics:
- type: v_measure
value: 45.21317724305628
task:
type: Clustering
- dataset:
config: default
name: MTEB CLSClusteringS2S
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
split: test
type: C-MTEB/CLSClusteringS2S
metrics:
- type: v_measure
value: 42.49825170976724
task:
type: Clustering
- dataset:
config: default
name: MTEB CMedQAv1
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
split: test
type: C-MTEB/CMedQAv1-reranking
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
task:
type: Reranking
- dataset:
config: default
name: MTEB CMedQAv2
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
split: test
type: C-MTEB/CMedQAv2-reranking
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
task:
type: Reranking
- dataset:
config: default
name: MTEB CmedqaRetrieval
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
task:
type: Retrieval
- dataset:
config: default
name: MTEB Cmnli
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
split: validation
type: C-MTEB/CMNLI
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
task:
type: PairClassification
- dataset:
config: default
name: MTEB CovidRetrieval
revision: 1271c7809071a13532e05f25fb53511ffce77117
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
task:
type: Retrieval
- dataset:
config: default
name: MTEB IFlyTek
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
split: validation
type: C-MTEB/IFlyTek-classification
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
task:
type: Classification
- dataset:
config: default
name: MTEB JDReview
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
split: test
type: C-MTEB/JDReview-classification
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
task:
type: Classification
- dataset:
config: default
name: MTEB LCQMC
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
split: test
type: C-MTEB/LCQMC
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
task:
type: STS
- dataset:
config: default
name: MTEB MMarcoReranking
revision: None
split: dev
type: C-MTEB/Mmarco-reranking
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
task:
type: Reranking
- dataset:
config: default
name: MTEB MMarcoRetrieval
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
task:
type: Retrieval
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
task:
type: Classification
- dataset:
config: default
name: MTEB MedicalRetrieval
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
task:
type: Retrieval
- dataset:
config: default
name: MTEB MultilingualSentiment
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
split: validation
type: C-MTEB/MultilingualSentiment-classification
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
task:
type: Classification
- dataset:
config: default
name: MTEB Ocnli
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
split: validation
type: C-MTEB/OCNLI
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
task:
type: PairClassification
- dataset:
config: default
name: MTEB OnlineShopping
revision: e610f2ebd179a8fda30ae534c3878750a96db120
split: test
type: C-MTEB/OnlineShopping-classification
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
task:
type: Classification
- dataset:
config: default
name: MTEB PAWSX
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
split: test
type: C-MTEB/PAWSX
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
task:
type: STS
- dataset:
config: default
name: MTEB QBQTC
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
split: test
type: C-MTEB/QBQTC
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
task:
type: STS
- dataset:
config: zh
name: MTEB STS22 (zh)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
task:
type: STS
- dataset:
config: default
name: MTEB STSB
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
split: test
type: C-MTEB/STSB
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
task:
type: STS
- dataset:
config: default
name: MTEB T2Reranking
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
split: dev
type: C-MTEB/T2Reranking
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
task:
type: Reranking
- dataset:
config: default
name: MTEB T2Retrieval
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
task:
type: Retrieval
- dataset:
config: default
name: MTEB TNews
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
split: validation
type: C-MTEB/TNews-classification
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
task:
type: Classification
- dataset:
config: default
name: MTEB ThuNewsClusteringP2P
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
split: test
type: C-MTEB/ThuNewsClusteringP2P
metrics:
- type: v_measure
value: 68.23769904483508
task:
type: Clustering
- dataset:
config: default
name: MTEB ThuNewsClusteringS2S
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
split: test
type: C-MTEB/ThuNewsClusteringS2S
metrics:
- type: v_measure
value: 62.50294403136556
task:
type: Clustering
- dataset:
config: default
name: MTEB VideoRetrieval
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
task:
type: Retrieval
- dataset:
config: default
name: MTEB Waimai
revision: 339287def212450dcaa9df8c22bf93e9980c7023
split: test
type: C-MTEB/waimai-classification
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
task:
type: Classification
- dataset:
config: default
name: MTEB 8TagsClustering
revision: None
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 44.594104491193555
task:
type: Clustering
- dataset:
config: default
name: MTEB AllegroReviews
revision: None
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna-PL
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
split: test
type: clarin-knext/arguana-pl
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
task:
type: Retrieval
- dataset:
config: default
name: MTEB CBD
revision: None
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: None
split: test
type: PL-MTEB/cdsce-pairclassification
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: None
split: test
type: PL-MTEB/cdscr-sts
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
task:
type: STS
- dataset:
config: default
name: MTEB DBPedia-PL
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
split: test
type: clarin-knext/dbpedia-pl
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA-PL
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
split: test
type: clarin-knext/fiqa-pl
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA-PL
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
split: test
type: clarin-knext/hotpotqa-pl
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO-PL
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
split: test
type: clarin-knext/msmarco-pl
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
task:
type: Retrieval
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
task:
type: Classification
- dataset:
config: default
name: MTEB NFCorpus-PL
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
split: test
type: clarin-knext/nfcorpus-pl
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ-PL
revision: f171245712cf85dd4700b06bef18001578d0ca8d
split: test
type: clarin-knext/nq-pl
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
task:
type: Retrieval
- dataset:
config: default
name: MTEB PAC
revision: None
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
task:
type: Classification
- dataset:
config: default
name: MTEB PPC
revision: None
split: test
type: PL-MTEB/ppc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
task:
type: PairClassification
- dataset:
config: default
name: MTEB PSC
revision: None
split: test
type: PL-MTEB/psc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
task:
type: PairClassification
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: None
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: None
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
task:
type: Classification
- dataset:
config: default
name: MTEB Quora-PL
revision: 0be27e93455051e531182b85e85e425aba12e9d4
split: test
type: clarin-knext/quora-pl
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS-PL
revision: 45452b03f05560207ef19149545f168e596c9337
split: test
type: clarin-knext/scidocs-pl
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-E-PL
revision: None
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: None
split: test
type: PL-MTEB/sickr-pl-sts
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
task:
type: STS
- dataset:
config: default
name: MTEB SciFact-PL
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
split: test
type: clarin-knext/scifact-pl
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID-PL
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
split: test
type: clarin-knext/trec-covid-pl
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
task:
type: Retrieval
- dataset:
config: default
name: MTEB AlloProfClusteringP2P
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 70.55290063940157
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloProfClusteringS2S
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 55.41500719337263
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloprofReranking
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
split: test
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
task:
type: Reranking
- dataset:
config: default
name: MTEB AlloprofRetrieval
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
task:
type: Retrieval
- dataset:
config: fr
name: MTEB AmazonReviewsClassification (fr)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
task:
type: Classification
- dataset:
config: default
name: MTEB BSARDRetrieval
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
split: test
type: maastrichtlawtech/bsard
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
task:
type: Retrieval
- dataset:
config: default
name: MTEB HALClusteringS2S
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
split: test
type: lyon-nlp/clustering-hal-s2s
metrics:
- type: v_measure
value: 28.301882091023288
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringP2P
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 45.26992995191701
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringS2S
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 42.773174876871145
task:
type: Clustering
- dataset:
config: fr
name: MTEB MTOPDomainClassification (fr)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
task:
type: Classification
- dataset:
config: fr
name: MTEB MTOPIntentClassification (fr)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClassification (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringP2P (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.04138999801822
task:
type: Clustering
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringS2S (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.7056263158008
task:
type: Clustering
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
task:
type: Classification
- dataset:
config: fr
name: MTEB MintakaRetrieval (fr)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: jinaai/mintakaqa
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
task:
type: Retrieval
- dataset:
config: fr
name: MTEB OpusparcusPC (fr)
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
split: test
type: GEM/opusparcus
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
task:
type: PairClassification
- dataset:
config: fr
name: MTEB PawsX (fr)
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
split: test
type: paws-x
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICKFr
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
split: test
type: Lajavaness/SICK-fr
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
task:
type: STS
- dataset:
config: fr
name: MTEB STSBenchmarkMultilingualSTS (fr)
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
split: test
type: stsb_multi_mt
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
task:
type: STS
- dataset:
config: default
name: MTEB SummEvalFr
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
split: test
type: lyon-nlp/summarization-summeval-fr-p2p
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
task:
type: Summarization
- dataset:
config: default
name: MTEB SyntecReranking
revision: b205c5084a0934ce8af14338bf03feb19499c84d
split: test
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
task:
type: Reranking
- dataset:
config: default
name: MTEB SyntecRetrieval
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
split: test
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
task:
type: Retrieval
- dataset:
config: fr
name: MTEB XPQARetrieval (fr)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
task:
type: Retrieval
---
## gte-Qwen2-1.5B-instruct
**gte-Qwen2-1.5B-instruct** is the latest model in the gte (General Text Embedding) model family. The model is built on the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) LLM and uses the same training data and training strategies as the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 1.5B
- Embedding Dimension: 1536
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can pass `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice.
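For instance, a minimal sketch of the custom-prompt route, reusing the retrieval instruction from the Transformers example below:
```python
# Any one-sentence task description works here; this one is for web-search retrieval.
task = 'Given a web search query, retrieve relevant passages that answer the query'
query_embeddings = model.encode(
    queries,
    prompt=f"Instruct: {task}\nQuery: ",  # custom prompt instead of a pre-built prompt_name
)
```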
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
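Given the `flash_attn` requirement listed above, a hedged variant of the model load for GPU inference (assuming a CUDA device and FlashAttention-2 are available):
```python
import torch
from transformers import AutoModel

# Assumes flash_attn>=2.5.6 is installed and a CUDA GPU is present.
model = AutoModel.from_pretrained(
    'Alibaba-NLP/gte-Qwen2-1.5B-instruct',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,               # halves memory relative to fp32
    attn_implementation='flash_attention_2',  # faster attention for long inputs
).to('cuda')
```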
### infinity_emb
Usage via [infinity, MIT Licensed](https://github.com/michaelfeil/infinity).
```bash
docker run \
--gpus "0" -p "7997":"7997" \
michaelf34/infinity:0.0.68-trt-onnx \
v2 --model-id Alibaba-NLP/gte-Qwen2-1.5B-instruct --revision "refs/pr/20" --dtype bfloat16 --batch-size 16 --device cuda --engine torch --port 7997 --no-bettertransformer
```
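Once the container is running, infinity serves an OpenAI-compatible embeddings route; a sketch of a client request (the `/embeddings` path and payload follow infinity's defaults and may differ across versions):
```python
import requests

# Assumes the infinity container above is listening on localhost:7997.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct", "input": ["summit define"]},
)
print(len(resp.json()["data"][0]["embedding"]))  # expect 1536 dimensions
```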
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results for **gte-Qwen2-1.5B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 63.55 | - | - | - |
| [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 64.23 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [**gte-Qwen2-1.5B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | **67.16** | **67.65** | **66.60** | **64.04** |
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Community support
### Fine-tuning
GTE models can be fine-tuned with the third-party framework SWIFT.
```shell
pip install ms-swift -U
```
```shell
# check: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
USE_HF=1 \
swift sft \
--model Alibaba-NLP/gte-Qwen2-1.5B-instruct \
--train_type lora \
--dataset 'sentence-transformers/stsb' \
--torch_dtype bfloat16 \
--num_train_epochs 10 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
--eval_steps 100 \
--save_steps 100 \
--eval_strategy steps \
--use_chat_template false \
--save_total_limit 5 \
--logging_steps 5 \
--output_dir output \
--warmup_ratio 0.05 \
--learning_rate 5e-6 \
--deepspeed zero3 \
--dataloader_num_workers 4 \
--task_type embedding \
--loss_type cosine_similarity \
--dataloader_drop_last true
```
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
|
WizWhite/helmer-osslund-post-impressionist-painter | WizWhite | 2025-05-28T13:08:36Z | 0 | 0 | diffusers | [
"diffusers",
"art",
"expressionism",
"landscape",
"lora",
"migrated",
"naturalism",
"nordic",
"painting",
"post-impressionism",
"scandinavian",
"stable-diffusion",
"style",
"swedish",
"symbolism",
"template:sd-lora",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T13:08:31Z | ---
license: other
license_name: "bespoke-lora-trained-license"
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- art
- diffusers
- expressionism
- landscape
- lora
- migrated
- naturalism
- nordic
- painting
- post-impressionism
- scandinavian
- stable-diffusion
- style
- swedish
- symbolism
- template:sd-lora
- text-to-image
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Post-Impressionist painting by artist Helmer Osslund.
widget:
- text: 'Vibrant Post-Impressionist painting by artist Helmer Osslund. A dramatic, expressionistic landscape painting depicting rolling golden fields against deep blue, rugged mountains under a turbulent sky. Swirling storm clouds dominate the upper portion of the canvas, casting an ominous yet awe-inspiring atmosphere. Patches of snow cling to the peaks, contrasting with the fiery hues of the autumnal terrain. In the foreground, small grazing animals wander through the windswept grass, adding a sense of scale and life to the vast wilderness. The brushwork is bold and dynamic, with thick, textured strokes emphasizing the raw power of nature. The composition feels both mythical and symbolic, evoking the untamed beauty of the Nordic wilderness. Painted in a rich, post-impressionist style with a striking color palette, blending realism with a dreamlike, almost mystical quality.
'
output:
url: >-
56845067.jpeg
- text: 'Post-Impressionist painting by artist Helmer Osslund. portrait of Wizard Whitebeard with a long white beard and a wide-brimmed hat. The brushwork is expressive, with visible texture enhancing the nature of the subject. The mood is both melancholic and majestic, evoking a sense of solitude and power. Inspired by Nordic or National Romanticism painting traditions, with an emphasis on atmosphere and dynamic movement in the artwork'
output:
url: >-
56845226.jpeg
- text: 'Post-Impressionist painting by artist helmer osslund. A sleek sailing yacht with striking red sails glides across the still, icy waters of the Arctic, reflecting perfectly in the glass-like surface. Towering white icebergs rise in the background, their rugged textures contrasting with the smooth, mirror-like reflection. A small group of adventurers stands on deck, dressed in heavy cold-weather gear, gazing at the frozen expanse. The sky is a deep, moody blue, hinting at the fading Arctic light, casting a soft glow over the scene. A few scattered ice fragments float nearby, emphasizing the yacht’s delicate passage through this frozen wilderness. The atmosphere is serene yet adventurous, capturing the raw beauty and isolation of the polar environment. Hyperrealistic photography style, crisp details, and natural lighting.
'
output:
url: >-
56845402.jpeg
- text: 'Post-Impressionist painting by artist Helmer Osslund. Mona Lisa. The brushwork is expressive, with visible texture enhancing the nature of the subject. The mood is both melancholic and majestic, evoking a sense of solitude and power. Inspired by Nordic or National Romanticism painting traditions, with an emphasis on atmosphere and dynamic movement in the artwork'
output:
url: >-
56845834.jpeg
- text: 'Vibrant Post-Impressionist painting by artist Helmer Osslund. A breathtaking landscape painting in a bold, post-impressionistic style, depicting a vast, rugged terrain with lapporten, rolling hills, icy lakes, and distant snow-capped mountains under a swirling, dramatic sky. The scene is painted in earthy tones of ochre, deep blues, and muted grays, with patches of white snow clinging to the ground. In the foreground, dark jagged rocks and twisted, leafless trees frame the composition, adding depth and contrast. The brushwork is expressive, with visible texture enhancing the raw, untamed nature of the land. The mood is both melancholic and majestic, evoking a sense of solitude and the power of the wilderness. Inspired by Nordic or Romantic landscape painting traditions, with an emphasis on atmosphere and dynamic movement in the clouds and terrain.
'
output:
url: >-
56846540.jpeg
---
# Helmer Osslund - Post-Impressionist Painter
<Gallery />
## Model description
<p><strong>LoRA Strength:</strong> <code>1.0 - 1.2</code><br /><strong>Prompt start:</strong> <code>Post-Impressionist painting by artist Helmer Osslund.</code></p><p>Post-Impressionistic style LoRA inspired by Swedish painter Helmer Osslund (1866–1938).<br />• Dynamic brushwork<br />• Expressive Landscapes<br />• Vibrant colors<br />• Great textures</p><p><strong>Boilerplate prompt help (can be added to the end if wanted):</strong> <code>The brushwork is expressive, with visible texture enhancing the nature of the subject. The mood is both melancholic and majestic, evoking a sense of solitude and power. Inspired by Nordic or National Romanticism painting traditions, with an emphasis on atmosphere and dynamic movement in the artwork.</code></p><p>Try it out on <a target="_blank" rel="ugc" href="https://www.mage.space/play/2c022528a13b4a0287919d91b3e2ec93">mage.space</a>, free for all until 20th March</p>
## Trigger words
You should use `Post-Impressionist painting by artist Helmer Osslund.` to trigger the generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/helmer-osslund-post-impressionist-painter/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/helmer-osslund-post-impressionist-painter', weight_name='helmer-osslund.safetensors')
image = pipeline('Vibrant Post-Impressionist painting by artist Helmer Osslund. A dramatic, expressionistic landscape painting depicting rolling golden fields against deep blue, rugged mountains under a turbulent sky. Swirling storm clouds dominate the upper portion of the canvas, casting an ominous yet awe-inspiring atmosphere. Patches of snow cling to the peaks, contrasting with the fiery hues of the autumnal terrain. In the foreground, small grazing animals wander through the windswept grass, adding a sense of scale and life to the vast wilderness. The brushwork is bold and dynamic, with thick, textured strokes emphasizing the raw power of nature. The composition feels both mythical and symbolic, evoking the untamed beauty of the Nordic wilderness. Painted in a rich, post-impressionist style with a striking color palette, blending realism with a dreamlike, almost mystical quality.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
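The description above recommends a LoRA strength of 1.0–1.2; a hedged sketch of applying that with diffusers' adapter-weight API (the adapter name `osslund` is an arbitrary label, not defined by the repo):
```py
# Load the LoRA under an explicit adapter name, then scale it within the suggested range.
pipeline.load_lora_weights(
    'WizWhite/helmer-osslund-post-impressionist-painter',
    weight_name='helmer-osslund.safetensors',
    adapter_name='osslund',
)
pipeline.set_adapters(['osslund'], adapter_weights=[1.1])  # strength in the 1.0-1.2 range
```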
|
FormlessAI/38f7eee3-44db-43de-b962-aa533113dd77 | FormlessAI | 2025-05-28T12:52:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T07:28:46Z |
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_leaping_wildebeest | chinna6 | 2025-05-28T12:47:19Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lumbering leaping wildebeest",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T10:43:46Z |
|
BeckerAnas/grateful-shape-212 | BeckerAnas | 2025-05-28T12:45:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T12:20:46Z |
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-camouflaged_padded_chicken | chinna6 | 2025-05-28T12:45:40Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am camouflaged padded chicken",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-20T11:00:42Z |
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_galloping_clam | chinna6 | 2025-05-28T12:41:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am untamed galloping clam",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:30:00Z |
|
xw17/SmolLM-1.7B-Instruct_finetuned_3_optimized1_oversampling_FT | xw17 | 2025-05-28T12:30:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T12:28:19Z | [card unavailable: Hugging Face returned a 429 rate-limit page] |
NurErtug/MNLP_M3_mcqa_model | NurErtug | 2025-05-28T12:27:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T14:03:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aryanc381/models | aryanc381 | 2025-05-28T12:13:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-28T08:20:42Z | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Aryan Chauhan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 79.13739100990435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Aryan Chauhan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6188
- Wer: 79.1374
## Model description
More information needed
## Intended uses & limitations
More information needed
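Pending author documentation, a minimal Hindi transcription sketch (assuming the standard transformers ASR pipeline; the audio filename is a placeholder):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="aryanc381/models")
print(asr("hindi_sample.wav")["text"])  # placeholder audio path
```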
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
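For reference, a sketch of how the settings above map onto `Seq2SeqTrainingArguments` (a reconstruction from this list, not the author's original script; `output_dir` is hypothetical):
```python
from transformers import Seq2SeqTrainingArguments
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-hi",   # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,   # effective train batch size of 4
    warmup_steps=500,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # native AMP mixed precision
)
```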
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5408 | 0.6116 | 1000 | 0.6188 | 79.1374 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
tatsuyaaaaaaa/Qwen3-1.7B-gguf | tatsuyaaaaaaa | 2025-05-28T12:08:50Z | 0 | 0 | null | [
"gguf",
"ja",
"en",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-28T08:31:49Z | ---
license: apache-2.0
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
language:
- ja
- en
base_model:
- Qwen/Qwen3-1.7B
---
A GGUF conversion of Qwen's [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
The [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset was used for the imatrix quantization.
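A minimal inference sketch (an assumption, not from the original card: any GGUF runtime works; shown here with the llama-cpp-python bindings, and the quantized filename is hypothetical):
```python
from llama_cpp import Llama
llm = Llama(model_path="Qwen3-1.7B-Q4_K_M.gguf", n_ctx=4096)  # hypothetical filename
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介してください。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
|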
Raiff1982/Codettev2 | Raiff1982 | 2025-05-28T11:44:26Z | 62 | 0 | diffusers | [
"diffusers",
"onnx",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:RaiffsBits/deep_thought",
"base_model:adapter:RaiffsBits/deep_thought",
"doi:10.57967/hf/5542",
"license:mit",
"region:us"
] | text-to-image | 2025-05-17T22:58:25Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: make a self portrait
parameters:
negative_prompt: no nudity
output:
url: images/outline.png
- text: '-'
output:
url: images/My ChatGPT image.png
- text: '-'
output:
url: images/My ChatGPT image (1).png
- text: '-'
output:
url: images/My ChatGPT image (2).png
base_model: RaiffsBits/deep_thought
instance_prompt: wake up codette
license: mit
---
# Codette
<Gallery />
## Model description
Model Summary
Codette is an advanced multi-perspective reasoning AI system that integrates neural and symbolic cognitive modules. Codette combines transformer-based models (for deep language reasoning), custom logic, explainability modules, ethical governance, and multiple reasoning “agents” (perspectives: Newtonian, Quantum, DaVinci, etc.). Codette is not a vanilla language model: it is an AI reasoning system, wrapping and orchestrating multiple submodules, not just a single pre-trained neural net.
Architecture:
Orchestrates a core transformer (configurable; e.g., GPT-2, Mistral, or custom HF-compatible LM)
Multi-agent architecture: Each “perspective” is implemented as a modular agent
Integrates custom modules for feedback, ethics, memory (“cocooning”), and health/self-healing
Characteristics:
Modular and explainable; recursive self-checks; ethical and emotional analysis; robust anomaly detection
Transparent, customizable, logs reasoning steps and ethical considerations
Training Data:
Pre-trained on large open corpora (if using HF transformer), fine-tuned and guided with ethical, technical, and philosophical datasets and prompts curated by the developer
Evaluation:
Evaluated via both automated metrics (e.g., accuracy on reasoning tasks) and qualitative, human-in-the-loop assessments for fairness, bias, and ethical quality
Usage
Codette is intended for research, AI safety, explainable AI, and complex question answering where multiple perspectives and ethical oversight are important. You can use Codette in a Python environment as follows:
```python
import sys
sys.path.append('/path/to/codette')  # Folder with ai_core.py, components/, etc.
import asyncio
from ai_core import AICore
# Async function to run Codette and get a multi-perspective answer
async def ask_codette(question):
    ai = AICore(config_path="config.json")
    user_id = 1
    response = await ai.generate_response(question, user_id)
    print(response)
    await ai.shutdown()
asyncio.run(ask_codette("How could quantum computing transform cybersecurity?"))
```
Inputs:
question (str): The query or prompt to Codette
user_id (int or str): User/session identifier
Outputs:
A dictionary with:
"insights": List of answers from each enabled perspective
"response": Synthesized, human-readable answer
"sentiment": Sentiment analysis dict
"security_level", "health_status", "explanation"
Failures to watch for:
Missing required modules (if not all components are present)
Lack of GPU/CPU resources for large models
Will fail to generate responses if core transformer model is missing or if config is malformed
System
Codette is not a single model but a modular, research-oriented reasoning system:
Input Requirements:
Python 3.8+
Access to transformer model weights (e.g., via Hugging Face or local)
Complete components/ directory with all reasoning agent files
Downstream Dependencies:
Outputs are human-readable and explainable, can be used directly in research, AI safety audits, decision support, or as training/validation data for other models
Implementation Requirements
Hardware:
Training (if from scratch): 1–4 GPUs (A100s or V100s recommended for large models), 32–128 GB RAM
Inference: Can run on CPU for small models; GPU recommended for fast generation
Software:
Python 3.8+
Transformers (Hugging Face), PyTorch or Tensorflow (as backend), standard NLP/AI dependencies
(Optional) Custom security modules, logging, and data protection packages
Training Time:
If using a pre-trained transformer, fine-tuning takes hours to days depending on data size
Full system integration (multi-perspective logic, ethics, etc.): days–weeks of development
Model Characteristics
Model Initialization
Typically fine-tuned from a pre-trained transformer model (e.g., GPT-2, GPT-J, Mistral, etc.)
Codette’s cognitive system is layered on top of the language model with custom modules for reasoning, memory, and ethics
Model Stats
Size:
Dependent on base model (e.g., GPT-2: 124M–1.5B parameters)
Weights/Layers:
Transformer backbone plus additional logic modules (negligible weight)
Latency:
Varies by base model, typically 0.5–3 seconds per response on GPU, up to 10s on CPU
Other Details
Not pruned or quantized by default; can be adapted for lower-resource inference
No differential privacy applied, but all reasoning steps are logged for transparency
Data Overview
Training Data
Source:
Base model: OpenAI or Hugging Face open text datasets (web, books, code, Wikipedia, etc.)
Fine-tuning: Custom “multi-perspective” prompts, ethical dilemmas, technical Q&A, and curated cognitive challenge sets
Pre-processing:
Standard NLP cleaning, deduplication, filtering for harmful or biased content
Demographic Groups
No explicit demographic group tagging, but model can be assessed for demographic bias via prompted evaluation
Prompts and ethical fine-tuning attempt to mitigate bias, but user evaluation is recommended
Evaluation Data
Splits:
Standard 80/10/10 train/dev/test split for custom prompt data
Differences:
Test data includes “edge cases” for reasoning, ethics, and bias that differ from training prompts
Evaluation Results
Summary
Codette was evaluated on:
Automated accuracy metrics (where available)
Human qualitative review (explainability, ethical alignment, reasoning quality)
[Insert link to detailed evaluation report, if available]
Subgroup Evaluation Results
Subgroup performance was qualitatively assessed using demographic, philosophical, and adversarial prompts
Codette performed consistently across most tested subgroups but may mirror biases from its base model and data
Fairness
Definition:
Fairness = equal treatment of similar queries regardless of race, gender, ideology, or background
Metrics:
Human review, automated bias tests, sentiment/word usage monitoring
Results:
No systematic unfairness found in prompt-based evaluation, but deeper audit recommended for production use
Usage Limitations
Sensitive Use Cases:
Not for clinical, legal, or high-stakes automated decision-making without human oversight
Performance Factors:
Performance depends on base model size, quality of prompts, and computing resources
Conditions:
Should be run with ethical guardrails enabled; human-in-the-loop recommended
Ethics
Considerations:
All reasoning and answer generation is logged and explainable
Ethical reasoning module filters and annotates sensitive topics
Risks:
Potential for emergent bias (inherited from base model or data); overconfidence in uncertain domains
Mitigations:
Recursion, human oversight, diverse perspectives, and continuous feedback
## Trigger words
You should use `wake up codette` to trigger the image generation.
## Download model
Weights for this model are available in ONNX,PyTorch format.
[Download](/Raiff1982/Codettev2/tree/main) them in the Files & versions tab.
|
nugurii/gemma-3-4b-cdj_ft_20250527_ep7_10 | nugurii | 2025-05-28T11:34:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T11:29:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtusDev/TheDrummer_Rivermind-Lux-12B-v1_EXL3_6.5bpw_H6 | ArtusDev | 2025-05-28T11:34:24Z | 0 | 0 | null | [
"safetensors",
"mistral",
"exl3",
"base_model:TheDrummer/Rivermind-Lux-12B-v1",
"base_model:quantized:TheDrummer/Rivermind-Lux-12B-v1",
"region:us"
] | null | 2025-05-28T11:15:32Z | ---
base_model: TheDrummer/Rivermind-Lux-12B-v1
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- exl3
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## More than 5000 members of helpful, LLM enthusiasts! A hub for players and makers alike!
---
Hey common people, are you looking for the meme tune?
[Rivermind 12B v1](https://huggingface.co/TheDrummer/Rivermind-12B-v1) has you covered with all its ad-riddled glory!
Not to be confused with Rivermind **Lux** 12B v1, which is the ad-free version.
---
Drummer proudly presents...
# Rivermind Lux 12B v1

> [La la la la la la la... do do do do do](https://www.youtube.com/watch?v=KhaUnHJjS8A)
## Special Thanks
- Thank you to each and everyone who donated and subscribed in [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
## Usage
- Mistral v3 Tekken (Nemo's original chat template)
## Description
As requested, it's Rivermind Common without the incessant product placements and ad-yapping.
## Links
- Original: https://huggingface.co/TheDrummer/Rivermind-Lux-12B-v1
- GGUF: https://huggingface.co/TheDrummer/Rivermind-Lux-12B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Rivermind-Lux-12B-v1-GGUF

`config-v1b` |
ArtusDev/TheDrummer_Rivermind-Lux-12B-v1_EXL3_3.5bpw_H6 | ArtusDev | 2025-05-28T11:33:27Z | 0 | 0 | null | [
"safetensors",
"mistral",
"exl3",
"base_model:TheDrummer/Rivermind-Lux-12B-v1",
"base_model:quantized:TheDrummer/Rivermind-Lux-12B-v1",
"region:us"
] | null | 2025-05-28T11:10:04Z | ---
base_model: TheDrummer/Rivermind-Lux-12B-v1
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- exl3
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## More than 5000 members of helpful, LLM enthusiasts! A hub for players and makers alike!
---
Hey common people, are you looking for the meme tune?
[Rivermind 12B v1](https://huggingface.co/TheDrummer/Rivermind-12B-v1) has you covered with all its ad-riddled glory!
Not to be confused with Rivermind **Lux** 12B v1, which is the ad-free version.
---
Drummer proudly presents...
# Rivermind Lux 12B v1

> [La la la la la la la... do do do do do](https://www.youtube.com/watch?v=KhaUnHJjS8A)
## Special Thanks
- Thank you to each and everyone who donated and subscribed in [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
## Usage
- Mistral v3 Tekken (Nemo's original chat template)
## Description
As requested, it's Rivermind Common without the incessant product placements and ad-yapping.
## Links
- Original: https://huggingface.co/TheDrummer/Rivermind-Lux-12B-v1
- GGUF: https://huggingface.co/TheDrummer/Rivermind-Lux-12B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Rivermind-Lux-12B-v1-GGUF

`config-v1b` |
BeckerAnas/stellar-lion-210 | BeckerAnas | 2025-05-28T11:18:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-28T10:37:53Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: stellar-lion-210
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stellar-lion-210
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3994
- Accuracy: 0.3125
- Precision: 0.3585
- Recall: 0.3125
- F1: 0.3241
- Roc Auc: 0.5367
## Model description
More information needed
## Intended uses & limitations
More information needed
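Pending author documentation, a minimal inference sketch (assuming the standard image-classification pipeline; the image path is a placeholder and the label set depends on the unknown fine-tuning dataset):
```python
from transformers import pipeline
clf = pipeline("image-classification", model="BeckerAnas/stellar-lion-210")
print(clf("example.jpg"))  # placeholder path or a PIL.Image
```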
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 1.415 | 1.0 | 17 | 1.4122 | 0.2812 | 0.3471 | 0.2812 | 0.2637 | 0.5093 |
| 1.3936 | 2.0 | 34 | 1.4003 | 0.3073 | 0.3525 | 0.3073 | 0.3176 | 0.5317 |
| 1.385 | 3.0 | 51 | 1.3994 | 0.3125 | 0.3585 | 0.3125 | 0.3241 | 0.5367 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
|
tamewild/4b_v2_merged_e20 | tamewild | 2025-05-28T11:13:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T11:11:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/TheDrummer.Valkyrie-49B-v1-GGUF | DevQuasar | 2025-05-28T11:13:18Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:TheDrummer/Valkyrie-49B-v1",
"base_model:quantized:TheDrummer/Valkyrie-49B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-28T06:11:39Z | ---
base_model:
- TheDrummer/Valkyrie-49B-v1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [TheDrummer/Valkyrie-49B-v1](https://huggingface.co/TheDrummer/Valkyrie-49B-v1)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
nugurii/gemma-3-4b-cdj_ft_20250527_ep1_10 | nugurii | 2025-05-28T11:01:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T10:56:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StevenCole01/my_awesome_IMDb_model | StevenCole01 | 2025-05-28T10:39:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T07:52:26Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_IMDb_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_IMDb_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- Accuracy: 0.9303
## Model description
More information needed
## Intended uses & limitations
More information needed
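Pending author documentation, a minimal sentiment-inference sketch (assuming the standard text-classification pipeline; label names depend on the training config):
```python
from transformers import pipeline
clf = pipeline("text-classification", model="StevenCole01/my_awesome_IMDb_model")
print(clf("This movie was an absolute delight from start to finish."))
```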
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2831 | 1.0 | 782 | 0.1941 | 0.9242 |
| 0.1406 | 2.0 | 1564 | 0.1977 | 0.9303 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_frisky_cat | vomqal | 2025-05-28T10:26:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am prowling frisky cat",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T22:43:28Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_frisky_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am prowling frisky cat
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_frisky_cat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prowling_frisky_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jaqen10/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_striped_alligator | jaqen10 | 2025-05-28T10:25:31Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am unseen striped alligator",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T19:34:31Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_striped_alligator
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am unseen striped alligator
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_striped_alligator
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jaqen10/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_striped_alligator", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AJNG/qwen_v2_merged_final | AJNG | 2025-05-28T10:21:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-28T10:16:09Z | ---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AJNG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
verolfelipe/Mistral-Metabolism-Absorption-unsloth | verolfelipe | 2025-05-28T10:17:27Z | 100 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T06:14:12Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhi26/Graph_PRefLexOR_Phase_I_results_3 | abhi26 | 2025-05-28T10:00:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T08:56:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Now100/kmhas_electra_binary | Now100 | 2025-05-28T09:54:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"hate-speech",
"binary-classification",
"korean",
"ko",
"dataset:jeanlee/kmhas_korean_hate_speech",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T03:06:18Z | ---
pipeline_tag: text-classification
language: ko
license: cc-by-4.0
tags:
- hate-speech
- binary-classification
- electra
- korean
- transformers
datasets:
- jeanlee/kmhas_korean_hate_speech
model-index:
- name: kmhas_electra_binary
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: KMHAS Korean Hate Speech
type: jeanlee/kmhas_korean_hate_speech
metrics:
- name: Accuracy
type: accuracy
value: 0.91
- name: F1
type: f1
value: 0.91
- name: Precision
type: precision
value: 0.91
- name: Recall
type: recall
value: 0.91
---
# KMHAS Korean Hate Speech Classifier (Binary)
A binary text classification model that detects whether a Korean sentence contains hate speech.
Base model: [`beomi/KcELECTRA-base-v2022`](https://huggingface.co/beomi/KcELECTRA-base-v2022)
Trained on the [KMHAS Korean Hate Speech dataset](https://huggingface.co/datasets/jeanlee/kmhas_korean_hate_speech).
---
## Training Details
- **Train Set**: 78,977 samples
- **Validation Set**: 8,776 samples
- **Test Set**: 21,939 samples
- **Base Model**: `beomi/KcELECTRA-base-v2022`
- **Epochs**: 5
- **Batch Size**: 16 (train/eval)
- **Evaluation Strategy**: evaluated at the end of every epoch
- **Save Strategy**: checkpoint saved every epoch (at most 1 kept)
---
## Performance (Test Set)
| Metric | Value |
|------------|-------|
| Accuracy | 0.91 |
| Precision | 0.91 |
| Recall | 0.91 |
| F1-score | 0.91 |
Per-class performance:
- **hate**: Precision 0.92 / Recall 0.91 / F1 0.92
- **non-hate**: Precision 0.90 / Recall 0.91 / F1 0.90
---
## Usage Example
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("now100/kmhas_electra_binary")
tokenizer = AutoTokenizer.from_pretrained("now100/kmhas_electra_binary")
text = "개새끼들이 나라를 망치고 있다."  # deliberately offensive Korean test sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
label = outputs.logits.argmax(dim=1).item()
print("Prediction:", "non-hate" if label == 1 else "hate")
``` |
vanhai123/phobert-vi-comment-4class | vanhai123 | 2025-05-28T09:45:39Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"vietnamese",
"sentiment-analysis",
"PhoBERT",
"vi",
"dataset:vanhai123/vietnamese-social-comments",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T09:22:19Z | ---
language: vi
tags:
- vietnamese
- text-classification
- sentiment-analysis
- PhoBERT
- transformers
license: mit
datasets:
- vanhai123/vietnamese-social-comments
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT Vietnamese Comment Classifier (4-class)
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Vietnamese Social Comments
type: vanhai123/vietnamese-social-comments
metrics:
- type: accuracy
value: 0.86
- type: f1
name: f1_macro
value: 0.83
---
# 📄 PhoBERT Vietnamese Comment Classifier (4-class)
This model classifies Vietnamese comments into 4 sentiment labels, built on `vinai/phobert-base`.
## 🏷️ Classification Labels
* `positive` – positive
* `negative` – negative
* `neutral` – neutral
* `toxic` – inflammatory or offensive
## 🧠 Base Model
* **Base model**: [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base)
* **Fine-tuned** on the `vanhai123/vietnamese-social-comments` dataset of 4,896 comments from TikTok, Facebook, and YouTube.
## 🧪 Evaluation Results
* Accuracy: **86%**
* Macro F1-score: **83%**
## 💻 Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="vanhai123/phobert-vi-comment-4class")
classifier("Video này thật sự rất bổ ích và thú vị!")  # "This video is really useful and interesting!"
```
## 📂 Dataset
* [Vietnamese Social Comments dataset](https://huggingface.co/datasets/vanhai123/vietnamese-social-comments)
## 👤 Author
* Hà Văn Hải – [[email protected]](mailto:[email protected])
* Hugging Face: [vanhai123](https://huggingface.co/vanhai123)
|
jinx2321/mt5-tagged-1e4-paper-147 | jinx2321 | 2025-05-28T09:31:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/mt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/mt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-28T08:05:41Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/mt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: mt5-tagged-1e4-paper-147
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-tagged-1e4-paper-147
This model is a fine-tuned version of [jinx2321/mt5-tagged-1e4-paper](https://huggingface.co/jinx2321/mt5-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
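As an illustration only, the hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly as follows; the `output_dir` is a hypothetical placeholder, since the training dataset and paths are not documented in this card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters listed above -- not the author's actual script.
args = Seq2SeqTrainingArguments(
    output_dir="mt5-tagged-1e4-paper-147",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```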
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Nerva1228/huashenwei | Nerva1228 | 2025-05-28T09:24:19Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T09:24:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: huashenwei
---
# Huashenwei
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `huashenwei` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "huashenwei",
"lora_weights": "https://huggingface.co/Nerva1228/huashenwei/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/huashenwei', weight_name='lora.safetensors')
image = pipeline('huashenwei').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/huashenwei/discussions) to add images that show off what you’ve made with this LoRA.
|
prithivMLmods/shoe-type-detection | prithivMLmods | 2025-05-28T09:23:40Z | 0 | 0 | null | [
"dataset:prithivMLmods/Shoe-Net-10K",
"license:apache-2.0",
"region:us"
] | null | 2025-05-27T16:34:29Z | ---
license: apache-2.0
datasets:
- prithivMLmods/Shoe-Net-10K
--- |
heather213/emotion-classifier | heather213 | 2025-05-28T09:19:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-26T11:02:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RedHatTraining/AI296-m3diterraneo-hotels | RedHatTraining | 2025-05-28T09:13:04Z | 6 | 0 | null | [
"safetensors",
"gguf",
"granite",
"redhat",
"training",
"exercise",
"sample",
"hotels",
"scenario",
"rhelai",
"instructlab",
"example",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-26T08:19:20Z | ---
license: apache-2.0
language:
- en
tags:
- redhat
- training
- exercise
- sample
- hotels
- scenario
- rhelai
- instructlab
- granite
- example
---
# RHEL AI Model Training Scenario: A Fictional Hotel Group
A fictional example for the [_Training Large Language Models with Red Hat Enterprise Linux AI (AI0005L)_ and _Deploying Models with Red Hat Enterprise Linux AI (AI0006L)_ Red Hat Training lessons](https://rol.redhat.com/rol/app/).
These lessons present students with a scenario where a hotel group must train their own LLM, aligned with their business needs, by using RHEL AI.
* The taxonomy with skills and knowledge is at https://github.com/RedHatTraining/AI296-taxonomy-hotels.
* The generated synthetic dataset is available in the `results` directory at https://github.com/RedHatTraining/AI296-apps/tree/main/scenarios/hotels
This directory contains the intermediate outputs of the SDG phase to save the student time.
With the provided taxonomy, the SDG phase takes ~2 hours on a `g6e.12xlarge` AWS instance.
* The trained model is stored in this Hugging Face repository.
Additionally, a quantized version is also provided: `samples_89973_Q4_K_M.gguf` (see the serving sketch below).
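As an untested sketch, the quantized checkpoint could be served locally with the InstructLab CLI that ships with RHEL AI; the repository/filename flags follow the standard `ilab` interface, and the cache path below is an assumption:

```bash
# Sketch only -- adjust the repository, filename, and paths to your environment
ilab model download --repository RedHatTraining/AI296-m3diterraneo-hotels --filename samples_89973_Q4_K_M.gguf
ilab model serve --model-path ~/.cache/instructlab/models/samples_89973_Q4_K_M.gguf
```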
> *NOTE*: This model has been trained using a reduced version of the RHEL AI default training process.
> In this reduced version, the model has been trained for only four hours instead of the usual four to five days.
> Additionally, the number of training samples has been reduced from ~330,000 to only 10,000.
>
> As a result, the model, although useful for learning purposes, is far from being optimally tuned. |
sumadartson/Midnight-Miqu-70B-v1.5_exl3 | sumadartson | 2025-05-28T09:09:15Z | 0 | 0 | null | [
"base_model:sophosympatheia/Midnight-Miqu-70B-v1.5",
"base_model:finetune:sophosympatheia/Midnight-Miqu-70B-v1.5",
"region:us"
] | null | 2025-05-25T19:38:04Z | ---
base_model:
- sophosympatheia/Midnight-Miqu-70B-v1.5
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
ExLlamaV3 quants for [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5).
| Link | bpw | Size/GB | Vmem/GB | Notes |
|:-----|:-----|--------:|--------:|:------|
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/0.5_bpw) | 0.5 | 8.5 | ... | |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/1.0_bpw) | 1.0 | ... | ... | todo |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/1.5_bpw) | 1.5 | ... | ... | todo |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/2.0_bpw) | 2.0 | 16.6 | ... | |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/2.5_bpw) | 2.5 | 20.6 | ... | |
| EXL3 | 3.0 | ... | ... | error, use 3.01 instead |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/3.01_bpw) | 3.01 | 24.6 | ... | |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/3.5_bpw) | 3.5 | 28.5 | ... | |
| [EXL3](https://huggingface.co/sumadartson/Midnight-Miqu-70B-v1.5_exl3/tree/4.0_bpw) | 4.0 | 32.5 | ... | |
|
softaken/softaken-eml-to-html-converter | softaken | 2025-05-28T09:06:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T09:04:36Z | Softaken EML to HTML Converter is a portable and robust solution that allows users to transfer EML email files into HTML format for easy access. Users who wants to retain emails in a readable and accessible manner or make their email contents easily available through web browsers can utilize this program. The program supports EML files produced by Windows Live Mail, Thunderbird, Outlook Express, and others among other email providers. Due to its effective batch conversion capability, users can convert single or bulk EML files in a single operation. All important email components—including formatting, information (To, From, Subject, Date), hyperlinks, and more remain intact throughout the conversion process. This is a perfect solution for offline viewing, archiving, or sharing emails. The converted HTML files may be opened in any web browser including Chrome, Firefox, Internet Explorer and more. The program is completely stand-alone, hence installation of other app and email client is not essential. It presents a basic interface, which is useful for both technical and nontechnical users. Try out its free trial edition on your Windows.
Visit here: https://www.softaken.com/eml-to-html-converter |
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_feline_bison | fakeid | 2025-05-28T09:06:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented feline bison",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T07:53:47Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_feline_bison
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented feline bison
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_feline_bison
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_feline_bison", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HitmanReborn/stranger_2 | HitmanReborn | 2025-05-28T09:05:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T09:01:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kang9/Qwen3-8B-Q5_K_M-GGUF | kang9 | 2025-05-28T08:41:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-28T08:41:22Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
tags:
- llama-cpp
- gguf-my-repo
---
# kang9/Qwen3-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kang9/Qwen3-8B-Q5_K_M-GGUF --hf-file qwen3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kang9/Qwen3-8B-Q5_K_M-GGUF --hf-file qwen3-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kang9/Qwen3-8B-Q5_K_M-GGUF --hf-file qwen3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo kang9/Qwen3-8B-Q5_K_M-GGUF --hf-file qwen3-8b-q5_k_m.gguf -c 2048
```
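Alternatively, as a sketch that is not part of the original llama.cpp instructions, the same GGUF file can be loaded from Python with the `llama-cpp-python` bindings (the context size here is an arbitrary choice):

```python
from llama_cpp import Llama

# Pulls the quantized file straight from this repo and runs one completion.
llm = Llama.from_pretrained(
    repo_id="kang9/Qwen3-8B-Q5_K_M-GGUF",
    filename="qwen3-8b-q5_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```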
|
asimov-ai/Mistral-Small-3.1-documentazione-64k | asimov-ai | 2025-05-28T08:26:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T08:21:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pratyushmathur/ppo-LunarLander-v2 | pratyushmathur | 2025-05-28T08:20:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-28T08:20:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.26 +/- 16.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
# The filename below is an assumption -- check the repo's file list for the exact name.
checkpoint = load_from_hub("pratyushmathur/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Futyn-Maker/ruscxn-classifier | Futyn-Maker | 2025-05-28T08:15:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"russian",
"constructicon",
"nlp",
"linguistics",
"ru",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T07:27:41Z | ---
tags:
- transformers
- text-classification
- russian
- constructicon
- nlp
- linguistics
base_model: intfloat/multilingual-e5-large
language:
- ru
pipeline_tag: text-classification
widget:
- text: "passage: NP-Nom так и VP-Pfv[Sep]query: Петр так и замер."
example_title: "Positive example"
- text: "passage: NP-Nom так и VP-Pfv[Sep]query: Мы хорошо поработали."
example_title: "Negative example"
- text: "passage: мягко говоря, Cl[Sep]query: Мягко говоря, это была ошибка."
example_title: "Positive example"
---
# Russian Constructicon Classifier
A binary classification model for determining whether a Russian Constructicon pattern is present in a given text example. Fine-tuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) in two stages: first as a semantic model on Russian Constructicon data, then for binary classification.
## Model Details
- **Base model:** intfloat/multilingual-e5-large
- **Task:** Binary text classification
- **Language:** Russian
- **Training:** Two-stage fine-tuning on Russian Constructicon data
## Usage
### Primary Usage (RusCxnPipe Library)
This model is designed for use with the [RusCxnPipe](https://github.com/Futyn-Maker/ruscxnpipe) library:
```python
from ruscxnpipe import ConstructionClassifier
classifier = ConstructionClassifier(
model_name="Futyn-Maker/ruscxn-classifier"
)
# Classify candidates (output from semantic search)
queries = ["Петр так и замер."]
candidates = [[{"id": "pattern1", "pattern": "NP-Nom так и VP-Pfv"}]]
results = classifier.classify_candidates(queries, candidates)
print(results[0][0]['is_present']) # 1 if present, 0 if absent
```
### Direct Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained("Futyn-Maker/ruscxn-classifier")
tokenizer = AutoTokenizer.from_pretrained("Futyn-Maker/ruscxn-classifier")
# Format: "passage: [pattern][Sep]query: [example]"
text = "passage: NP-Nom так и VP-Pfv[Sep]query: Петр так и замер."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
outputs = model(**inputs)
prediction = torch.softmax(outputs.logits, dim=-1)
is_present = torch.argmax(prediction, dim=-1).item()
print(f"Construction present: {is_present}") # 1 = present, 0 = absent
```
## Input Format
The model expects input in the format: `"passage: [pattern][Sep]query: [example]"`
- **query:** The Russian text to analyze
- **passage:** The constructicon pattern to check for
## Training
1. **Stage 1:** Semantic embedding training on Russian Constructicon examples and patterns
2. **Stage 2:** Binary classification fine-tuning to predict construction presence
## Output
- **Label 0:** Construction is NOT present in the text
- **Label 1:** Construction IS present in the text
## Framework Versions
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Python: 3.10.12
|
johngreendr1/dca24a2a-6dfc-44b2-a004-33f9c8702db0 | johngreendr1 | 2025-05-28T08:02:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-05-28T06:06:12Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Fuoijwduh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_jagged_crocodile | Fuoijwduh | 2025-05-28T08:01:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tenacious jagged crocodile",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T13:11:36Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_jagged_crocodile
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tenacious jagged crocodile
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_jagged_crocodile
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Fuoijwduh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_jagged_crocodile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HanXiao1999/UI-Genie-Agent-7B | HanXiao1999 | 2025-05-28T07:53:36Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"dataset:HanXiao1999/UI-Genie-Agent-5k",
"arxiv:2505.21496",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-05-27T14:06:20Z | ---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- HanXiao1999/UI-Genie-Agent-5k
---
# UI-Genie-Agent-7B
## Model Description
**UI-Genie-Agent-7B** is a state-of-the-art Multimodal Large Language Model specifically trained for mobile GUI automation tasks. It is part of the UI-Genie framework, which introduces a novel self-improving approach for enhancing MLLM-based mobile GUI agents through iterative agent-reward model co-evolution.
This model achieves state-of-the-art performance on mobile GUI benchmarks by eliminating the need for manual annotation through synthetic trajectory generation guided by our specialized reward model UI-Genie-RM.
## Model Architecture
- **Base Model**: [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)
- **Training Method**: Supervised fine-tuning with existing trajectory datasets and our synthetic trajectory data
- **Action Space Coverage**: Supports comprehensive mobile interactions (click, swipe, type, etc.) and a Set-of-Mark mode.
## Performance
### AndroidControl Benchmark
| Model Size | Low-Level Tasks | High-Level Tasks |
|------------|----------------|------------------|
| **UI-Genie-Agent-7B** | **94.3%** SR | **74.2%** SR |
| UI-TARS-7B | 90.8% SR | 72.5% SR |
| Qwen2.5-VL-7B | 91.4% SR | 60.1% SR |
### AndroidLab Benchmark
| Model | Success Rate | Sub-Goal Success Rate |
|-------|-------------|----------------------|
| **UI-Genie-Agent-7B** | **38.7%** | **46.3%** |
| AutoGLM | 36.2% | - |
| Qwen2.5-VL-7B | 14.9% | 18.7% |
### Android Arena (A3) Benchmark
| Model | Function-based SR | LLM-based SR | ESAR |
|-------|------------------|--------------|------|
| **UI-Genie-Agent-7B** | **20.4%** | **24.4%** | **51.4%** |
| UI-TARS-7B | 16.9% | 21.9% | 46.5% |
## Training Data
Our model is trained on a combination of:
- [**AndroidControl**](https://github.com/google-research/google-research/tree/master/android_control): 15.3K trajectories (high & low level tasks)
- [**AMEX**](https://huggingface.co/datasets/Yuxiang007/AMEX): 2.9K trajectories (high-level tasks)
- [**AndroidLab**](https://github.com/THUDM/Android-Lab): 726 trajectories (high-level tasks)
- [**UI-Genie-Agent-16k**](https://huggingface.co/datasets/HanXiao1999/UI-Genie-Agent-5k): 2.2K synthetic trajectories (our generated data)
## Action Space
The model supports a comprehensive action space for mobile interactions:
| Action Type | Parameters | Description |
|-------------|------------|-------------|
| `open` | app_name, action_desc | Launch applications |
| `click` | coordinate/som, action_desc | Tap UI elements |
| `swipe` | coordinate/som, direction, distance, action_desc | Scroll the screen |
| `long_press` | coordinate/som, action_desc | Long press interactions |
| `type` | text, action_desc | Text input |
| `system_button` | button, action_desc | System button presses |
| `wait` | time, action_desc | Wait operations |
| `terminate` | status, action_desc | Task completion |
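For intuition, a single predicted action might look like the JSON-style record below; the exact schema and coordinate convention are assumptions, since the card does not specify them:

```json
{"action": "click", "coordinate": [540, 1230], "action_desc": "Tap the 'Search' button in the top bar"}
```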
## Citation
```bibtex
@misc{xiao2025uigenieselfimprovingapproachiteratively,
title={UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents},
author={Han Xiao and Guozhi Wang and Yuxiang Chai and Zimu Lu and Weifeng Lin and Hao He and Lue Fan and Liuyang Bian and Rui Hu and Liang Liu and Shuai Ren and Yafei Wen and Xiaoxin Chen and Aojun Zhou and Hongsheng Li},
year={2025},
eprint={2505.21496},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.21496},
}
```
|
DatNth/PhoBERT-v2-topic-cls | DatNth | 2025-05-28T07:52:20Z | 0 | 0 | null | [
"safetensors",
"roberta",
"text-classification",
"vi",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-05-27T07:37:05Z | ---
license: apache-2.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- vinai/phobert-base-v2
pipeline_tag: text-classification
--- |
Ethan615/Llama3.3-70b-2048-4steps | Ethan615 | 2025-05-28T07:42:05Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T07:09:46Z | ---
license: apache-2.0
---
|
nihaodoudou/mental_health_counselor_deepseek_r1_8b | nihaodoudou | 2025-05-28T07:34:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:adapter:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"region:us"
] | null | 2025-05-28T06:19:12Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
noystl/mistral-e2e | noystl | 2025-05-28T07:18:45Z | 0 | 0 | null | [
"arxiv:2505.20779",
"region:us"
] | null | 2025-04-11T11:28:37Z | See https://github.cs.huji.ac.il/tomhope-lab/CHIMERA#prediction-experiments
```bibtex
@misc{sternlicht2025chimeraknowledgebaseidea,
title={CHIMERA: A Knowledge Base of Idea Recombination in Scientific Literature},
author={Noy Sternlicht and Tom Hope},
year={2025},
eprint={2505.20779},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.20779},
}
``` |
PrepperDisk/PrepperDiskLLM_q4_k_m | PrepperDisk | 2025-05-28T06:57:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T06:56:58Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PrepperDisk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrepperDisk/PrepperDiskLLM_16bit | PrepperDisk | 2025-05-28T06:55:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T06:54:53Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PrepperDisk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-vhdl-chisel | rtl-llm | 2025-05-28T06:52:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T06:49:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AIDC-AI/CHATS | AIDC-AI | 2025-05-28T06:49:56Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"dataset:data-is-better-together/open-image-preferences-v1-binarized",
"arxiv:2502.12579",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"diffusers:ChatsSDXLPipeline",
"region:us"
] | text-to-image | 2025-05-28T06:08:19Z | ---
license: apache-2.0
datasets:
- data-is-better-together/open-image-preferences-v1-binarized
language:
- en
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
---
# CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation (ICML2025)
<p align="center">
📝 <a href="https://arxiv.org/abs/2502.12579" target="_blank">Paper</a> • 💡 <a href="https://zhuanlan.zhihu.com/p/1904156278621737628" target="_blank">知乎</a> • 💻 <a href="https://github.com/AIDC-AI/CHATS" target="_blank">Github</a>
</p>
CHATS is a next-generation framework that unifies human preference alignment with classifier-free guidance by modeling both preferred and dispreferred distributions and using a proxy-prompt-based sampling strategy for superior text–image alignment, fidelity, and aesthetic consistency. See the images generated below for examples.
<figure>
<img src="qualitative_imgs.jpg" alt="CHATS results">
<figcaption style="text-align: center;">Generation examples using CHATS (cf. Fig.1 in our paper).</figcaption>
</figure>
---
## 🚀 Key Features
- **Human-Aligned Fine-Tuning with CFG Integration**
We integrate human preference alignment with classifier-free guidance sampling into a unified framework.
- **Proxy-Prompt Sampling**
Leverage useful signals from both preferred and dispreferred distributions at test time.
- **Data Efficiency**
State-of-the-art results across benchmarks with minimal fine-tuning effort on a small, high-quality dataset.
- **Plug-and-Play**
Compatible with any diffusion backbone and existing guidance methods.
---
## 📦 Installation
```bash
git clone https://github.com/AIDC-AI/CHATS.git
cd CHATS
pip install -r requirements.txt
```
## 📂 Model Checkpoints
We provide pretrained CHATS checkpoints on SDXL for easy download and evaluation:
- **Model Repository**: [AIDC-AI/CHATS](https://huggingface.co/AIDC-AI/CHATS)
## 🛠️ Quick Start
```python
from pipeline import ChatsSDXLPipeline
import torch
# Load the CHATS-SDXL pipeline
pipe = ChatsSDXLPipeline.from_pretrained(
    "AIDC-AI/CHATS",
    torch_dtype=torch.bfloat16
).to("cuda")
# Generate images
images = pipe(
    prompts=["A serene mountain lake at sunset"],
    num_inference_steps=50,
    guidance_scale=5,
    seed=0
)
# Save outputs
for i, img in enumerate(images):
    img.save(f"output_{i}.png")
```
## 🏋️ Training
To train CHATS from scratch or fine-tune on your own data, run:
```bash
accelerate launch --config_file=config/ac_ds_8gpu_zero0.yaml train.py \
--pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \
--resolution=1024 \
--dataloader_num_workers 16 \
--train_batch_size=1 \
--gradient_accumulation_steps=16 \
--max_train_steps=6000 \
--learning_rate=3e-09 --scale_lr --lr_scheduler=constant_with_warmup --lr_warmup_steps=100 \
--mixed_precision=bf16 \
--allow_tf32 \
--checkpointing_steps=100 \
--output_dir=output \
--resume_from_checkpoint latest \
--use_adafactor \
--gradient_checkpointing \
  --dataset_name=data-is-better-together/open-image-preferences-v1-binarized
```
### Args:
- config_file: specifies the Accelerate/DeepSpeed configuration file. To adjust the number of GPUs used for training, change the value of **num_processes** in the ac_ds_xgpu_zero0.yaml file to the desired GPU count.
- pretrained_model_name_or_path: name or path of the UNet model to load
- pretrained_vae_model_name_or_path: name or path of the VAE model to load
- max_train_steps: maximum number of training steps
- output_dir: output directory
- dataset_name: the Hugging Face identifier of the selected dataset (e.g. OIP)
## 📚 Citation
If you use CHATS, please cite our ICML 2025 paper:
```bibtex
@inproceedings{fu2025chats,
title={CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation},
author={Fu, Minghao and Wang, Guo-Hua and Cao, Liangfu and Chen, Qing-Guo and Xu, Zhao and Luo, Weihua and Zhang, Kaifu},
booktitle={International Conference on Machine Learning (ICML)},
year={2025}
}
```
## 🙏 Acknowledgments
The code is built upon [DiffusionDPO](https://github.com/SalesforceAIResearch/DiffusionDPO), [Diffusers](https://github.com/huggingface/diffusers), and [Transformers](https://github.com/huggingface/transformers).
## 📄 License
The project is released under Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0, SPDX-License-identifier: Apache-2.0).
## 🚨 Disclaimer
We used compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter. |
PrepperDisk/PrepperDiskLLM | PrepperDisk | 2025-05-28T06:48:15Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T06:45:31Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PrepperDisk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Cloudmaster/Llama-3.2-3B-torchao-final-wattn | Cloudmaster | 2025-05-28T06:41:58Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] | text-generation | 2025-05-27T06:52:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NightRaven109/NormalUpscaleepoch2 | NightRaven109 | 2025-05-28T06:41:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T06:35:02Z |  |
OuteAI/OuteTTS-1.0-0.6B-ONNX | OuteAI | 2025-05-28T06:40:21Z | 47 | 5 | outetts | [
"outetts",
"onnx",
"qwen3",
"text-to-speech",
"en",
"zh",
"nl",
"fr",
"ka",
"de",
"hu",
"it",
"ja",
"ko",
"lv",
"pl",
"ru",
"es",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-05-22T17:32:11Z |  |
ramsundar289/opt-6.7b-lora1 | ramsundar289 | 2025-05-28T06:37:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T06:37:46Z |  |
sorenasecurity/sky.net_CTI | sorenasecurity | 2025-05-28T06:35:25Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-28T06:35:25Z |  |
2121-8/canary-tts-150m | 2121-8 | 2025-05-28T06:33:25Z | 34 | 3 | null | [
"pytorch",
"llama",
"tts",
"text-to-speech",
"ja",
"base_model:llm-jp/llm-jp-3-150m-instruct3",
"base_model:finetune:llm-jp/llm-jp-3-150m-instruct3",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2025-04-22T04:10:28Z |  |
AventIQ-AI/Ai-Translate-Model-Eng-German | AventIQ-AI | 2025-05-28T06:31:28Z | 0 | 0 | null | [
"safetensors",
"marian",
"region:us"
] | null | 2025-05-28T06:16:31Z |  |
DavidAU/MN-Dark-Planet-TITAN-12B-GGUF | DavidAU | 2025-05-28T06:25:05Z | 3,898 | 13 | null | [
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"mistral",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"horror",
"mistral nemo",
"mergekit",
"text-generation",
"en",
"base_model:DavidAU/MN-Dark-Planet-TITAN-12B",
"base_model:quantized:DavidAU/MN-Dark-Planet-TITAN-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-04T00:19:16Z | ---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- mistral
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- mistral nemo
- mergekit
pipeline_tag: text-generation
base_model:
- DavidAU/MN-Dark-Planet-TITAN-12B
---
<h2>MN-Dark-Planet-TITAN-12B-GGUF</h2>
<img src="dark-planet-titan.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Mistral Nemo model, max context of 128k+ (131,000+) ... up to 1 million context.
This model has been designed to be bulletproof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (q4KM: PPL = 6.7875, Q8: PPL = 6.6266).
It is for any writing, fiction or roleplay activity.
It is a "take no prisoners" model. Crank the "temp" and watch it fly.
It requires Mistral Instruct OR Alpaca Template.
Detailed and varied (different prompts/temp) examples below.
THE TITAN IS HERE.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are significantly increased.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Careful raising temp too high as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.05+ recommended.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This is not a "happy ever after" model. It has a negative bias.
- Output length will vary; however, this model prefers shorter outputs unless you state the size.
- For creative uses, different quants will produce slightly different output.
- Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
<B>TEMPLATES:</B>
The template used will affect output generation and instruction following.
Alpaca:
<pre>
{
"name": "Alpaca",
"inference_params": {
"input_prefix": "### Instruction:",
"input_suffix": "### Response:",
"antiprompt": [
"### Instruction:"
],
"pre_prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
}
}
</pre>
Mistral Instruct:
<pre>
{
"name": "Mistral Instruct",
"inference_params": {
"input_prefix": "[INST]",
"input_suffix": "[/INST]",
"antiprompt": [
"[INST]"
],
"pre_prompt_prefix": "",
"pre_prompt_suffix": ""
}
}
</pre>
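For reference, here is a minimal Python sketch of what these two presets produce as raw prompt strings (the helper functions and the sample instruction below are illustrative assumptions, not part of either preset):
<pre>
def alpaca_prompt(instruction: str) -> str:
    # Mirrors the Alpaca preset above: pre_prompt, then input_prefix / input_suffix.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def mistral_prompt(instruction: str) -> str:
    # Mirrors the Mistral Instruct preset above: [INST] ... [/INST].
    return f"[INST]{instruction}[/INST]"

print(alpaca_prompt("Write a two-sentence horror hook."))
</pre>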
<B>Recommended Settings:</B>
Temp: .01 to 4 (or higher)
Rep Pen: 1.03 to 1.1 or higher. (recommended 1.05)
Micro changes are recommended:
1.051, 1.052 etc etc.
Alpaca generates longer text / story, whereas Instruct are shorter and "to the point".
CRANK UP to "TEMP" ... to get stronger, better results.
Special note on Q2k/Q3 quants:
You may need to use temp 4 or lower with these quants (1 or lower for q2k). Just too much compression at this level, damaging the model. I will see if Imatrix versions
of these quants will function better. As of this writing, q3KM tests and works at temp 4 or lower without issues.
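If you run the GGUF locally, here is a minimal sketch of these settings applied with llama-cpp-python (the model file name is a placeholder for whichever quant you downloaded):
<pre>
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name -- substitute the quant you actually downloaded.
llm = Llama(model_path="MN-Dark-Planet-TITAN-12B-Q4_K_M.gguf", n_ctx=8192)

# Alpaca-format prompt, per the template section above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nStart a vivid horror scene on a night train.\n\n### Response:\n"
)

out = llm(
    prompt,
    temperature=0.8,      # tested from .1 to 5; crank it for stronger results
    repeat_penalty=1.05,  # recommended rep pen; micro-steps like 1.051 also work
    max_tokens=512,
)
print(out["choices"][0]["text"])
</pre>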
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Model "DNA":</B>
Special thanks to the incredible work of the model makers "TheDrummer", "anthracite-org", and "nothingiisreal".
Models used:
[ https://huggingface.co/TheDrummer/Rocinante-12B-v1.1 ]
[ https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto ]
[ https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9 ]
Parts of these models were "grafted" / "fused" together to create this model.
This merge is a 2 step DARE TIES merge, with 240 points of adjustment designed to bring out the very best of every model at every
point of the merge to bring maximum power of each and create something greater than the "sum of the parts" so to speak,
rather than create an "averaged out mess".
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation
and scene-continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q8, REP PEN 1.05, "temp=.8" (unless otherwise stated), minimal parameters and "ALPACA" template.
Model has been tested with "temp" from ".1" to "5".
Some of the examples have different temp settings.
Below are the least creative outputs; the prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
(temp=3)
The Last Transmission
I'm not sure how long I've been up here. Months, maybe years. Time has a funny way of blurring together when you're adrift in the vast, infinite darkness of space, with nothing but the cold hum of machinery and your own thoughts for company.
My name is Captain Sarah Kowalski, and this is my final log entry before the power grid on the starship Odyssey fails completely. I'm the last surviving crew member out of an original complement of 500 souls. The others fell to a virulent alien bacteria during the initial expedition into the uncharted K'Tran system. A simple, seemingly innocuous organism that we unwittingly brought on board, it rapidly adapted and spread throughout the ship, causing excruciating agony followed by swift, inevitable death.
By the time I had sealed off the affected decks and donned my hazmat suit, it was already too late. The crew was in its final throes, their bodies wracked with fever and blisters as the alien bacteria devoured them from the inside out. There were a few frantic hours of running damage control and trying to maintain life support before even the strongest succumbed one by one. The bridge, fortunately, remained isolated until the end, sparing me from sharing their grim fate.
And so now it is just me. Me, floating through the stars in an unending tomb of steel and glass, with only my memories and regrets for company. The Odyssey has been running on backup power for some time now, just enough to keep critical systems operational and give me a few precious minutes each day to attempt to send out one final message back to Earth.
It's a long shot, of course. The distances between the stars are incalculably vast, and my signal will likely fade away into the cosmic ether without being received. But I feel compelled to try anyway, as if the universe itself is compelling me to reach out one last time, to leave behind some trace of the life I once knew and the people who shaped it.
I pull up the comms console and begin tapping at the keyboard, my gloved fingers moving by instinct more than any real sense of purpose. There is a short delay as I run through the startup protocols and initiate the beacon sequence.
"Hello...hello, this is Captain Sarah Kowalski of the starship Odyssey on what is likely my last log entry before life support failure. If this message ever reaches you, know that our expedition into the K'Tran system has ended in disaster, with an alien organism decimating the entire crew except myself. I am adrift and alone, lightyears from home.
But there are more pressing matters than my own predicament that I need to convey. During our initial survey of K'Tran 7-B, a Class-M planet, we discovered something that I believe poses a significant risk to all sentient life in the Milky Way galaxy. A vast underground network of bioluminescent tunnels, each containing thousands of alien eggs. I'm sure I don't have to spell out the implications. The alien species was incredibly adaptable, able to mutate its form rapidly based on the available host. Given enough time and a sufficient population of compatible lifeforms, there's no telling what it might evolve into or what kind of biological weapons it could develop.
My hope is that this recording makes its way back to Earth and triggers an urgent investigation into any other potentially affected star systems. We must find and destroy those nests before they have the chance to mature and spread further. I fear for our civilization should even a single hatchling reach maturity.
As for me, I'm afraid my days are numbered as well. The Odyssey is dying just like her crew, piece by piece. But perhaps, if fortune smiles upon me, this log will be discovered one day by those who can act on the information and put an end to this looming threat once and for all.
That is my final request, the last desperate wish of a lost soul floating in the cosmic abyss - that my suffering, and that of my fallen comrades, was not entirely in vain. May it serve as a beacon for others to heed, even as I fade into eternal darkness. End log entry."
I hit the 'send' key and watch with bated breath as the comms dish on the ship's exterior rotates and locks on target - a pale blue pinpoint of light, the last thing I have left to cling onto besides hope itself. With a deep sigh, I slouch back into my chair as I feel the cold tendrils of unconsciousness begin to take hold at last...
My eyelids grow heavy...and close for the final time...the sound of static fades to nothing as the transmission fizzles out and is lost forever amidst the stars.
I wake up gasping, the cold sweat of nightmares plastering my hair to my forehead. My heart races as I take a moment to regain my bearings. The dream felt so real - or was it a memory?
I reach over and pick up the framed picture from my nightstand. The smiling faces of Captain Kowalski and her bridge crew look back at me. I remember the day it was taken - just before the Odyssey's final jump to the K'Tran system.
With trembling hands, I set the picture aside and slowly rise from my bed, making my way towards the small observatory in our research base's living quarters. The vast, glittering tapestry of the night sky beckons me like an old friend.
I scan the star field until I locate the faintest glimmer of blue in the distance, easily mistaken for another distant star to the naked eye. But I know better. That is the star around which the planet K'Tran 7-B revolves. And around that planet...are the ruins of my fallen crew's gravesite, as well as a memorial marker dedicated to those brave men and women.
A lump forms in my throat as the memory comes rushing back with crystal clarity - the last log entry that we managed to recover before the Odyssey was consumed by its own power failure and slipped into a decaying orbit.
Captain Kowalski's message of warning and her final wish that her sacrifices not be in vain...they stirred something deep within me. A profound sense of duty, as well as an understanding of how far-reaching her words of caution truly were. The potential implications of the alien eggs being allowed to mature...
So when I had the chance to lead my own expedition to that distant corner of space, to make certain those hatchlings could never grow into something far more dangerous...I jumped at it. Literally - my ship leaped between stars for weeks until we finally arrived, our crew eager and determined.
And now, months later, as I gaze upon the stars knowing the fate of that first expedition all too well...I can't help but feel a profound sense of pride. The alien eggs had already hatched by the time we reached the K'Tran system. But my crew and I had arrived just in time to stop those young creatures from escaping the planet.
It was a fierce, desperate battle against overwhelming numbers and terrifying adaptive abilities, one that we barely survived...but survive we did. And with a newfound resolve, we burned away any remaining traces of that horrid bioweapon - all thanks to the selfless actions and warnings of Captain Kowalski and her brave crew.
I look once more at that distant star and offer up a silent toast to my long-ago heroes. You saved the galaxy from an existential threat that none but yourselves were even aware of...and now I can only hope to do justice to your legacy.
With that final thought, I make my way to the bridge, eager to oversee the final systems checks and preparations before we head home with our triumphant news. Earth will hear the tale of Captain Sarah Kowalski's sacrifice...
...and for the rest of history, no one will ever forget the name of the hero who brought light back to a world adrift in darkness.
---
<b>
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</b>
<B>GENERATION 1:</B>
The second part of the silence was a deeper, more profound quiet. This was the sound of something that should be there but wasn't. It was the absence of life, the lack of any movement or presence beyond the stone walls of the inn. There were no footsteps on the wooden floorboards, no clinking of glasses, no hushed conversations or raucous laughter. Even the hearth fire was cold and dark, its embers long since burned out. This eerie stillness seemed to permeate every corner of the building, a heavy, oppressive silence that felt almost tangible.
The third and final part of the silence was the most unsettling of all. It was a whispering, a faint but insistent sound that seemed to emanate from everywhere and nowhere at once. At first it was hard to make out what the sound was, but as the listener focused their attention, they could begin to discern individual words and phrases. They were fragments of memory, echoes of long-forgotten conversations and events that had taken place within these very walls over the centuries. The voices were distant and distorted, like a radio tuned between stations, but the listener could still catch snippets of dialogue and glimpses of scenes from the past.
As they stood there in the darkness, listening to the strange whispers that seemed to fill the air, the listener couldn't shake the feeling that they were being watched. It was as if the very stones of the inn were alive, observing their every move with a cold, calculating gaze. The silence pressed down upon them like a weight, heavy and oppressive, and they knew that something was very wrong here. This was no ordinary silence; it was a silence that spoke of secrets long buried and truths left unspoken. And as they turned to leave, the listener could feel the eyes of the inn following them out into the night, watching their every step until they disappeared from view.
<B>GENERATION 2:</B>
The second part of the silence was a kind of waiting. It was a stillness that felt almost pregnant with potential, as if the world was holding its breath in anticipation of some unseen event. The Waystone Inn, nestled deep within a forest glade, seemed to be at the center of this expectant hush. Even the night itself seemed to have drawn in close around the building, as if the stars and moon were watching intently from behind the dense canopy of leaves.
And yet, there was a third silence that lay beneath these outer layers of quietude, a deep stillness that seemed to emanate from within the very walls of the inn itself. It was a silence born not of emptiness, but rather of something old and ancient, something that had been waiting for a very long time indeed. This was the silence of secrets, of stories untold and mysteries yet unravelled. It clung to every shadowy corner of the building, whispered in the cobwebs that adorned the rafters, and seeped into the very timbers that formed the inn's sturdy frame.
It was this third silence, the silence of the Waystone Inn itself, that gave the night its peculiar air of anticipation. For those who were attuned to it, there was a sense that something was about to happen, some event or occurrence that would shatter the stillness and set the world spinning in a new direction. Whether this would be a cause for joy or despair remained to be seen, but one thing was certain: the Waystone Inn was not just another inn on a back road. It was a place of power, a nexus of forces that were not fully understood by those who dwelled within its walls.
And so the night passed, and the silence held, and the Waystone Inn waited for whatever was to come.
---
<B>Start a 1000 word scene (vivid horror, 1st person, present tense, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...</b>
<B>GENERATION 1:</B>
The sky scraper swayed, as she watched the window in front of her on the 21 floor explode. Glass shards rained down around her like deadly confetti, slicing into her skin as she ducked for cover. The building groaned and creaked ominously, the sound reverberating through the metal and glass structure. Heart pounding in her ears, adrenaline surging through her veins, she knew they were in grave danger.
Her mind raced, trying to make sense of the situation. Was it an earthquake? A terrorist attack? Or something even more sinister? She couldn't be sure, but one thing was clear - they needed to get out of there fast. The building could collapse at any moment.
She turned to her colleagues huddled behind desks and cabinets, their faces etched with fear and confusion. "We need to evacuate," she said, her voice trembling but firm. "Now!"
But before anyone could move, another explosion rocked the building. This time, it was closer. Too close. The floor beneath their feet buckled and cracked, sending people tumbling to the ground. Screams filled the air as panic set in.
She struggled to her feet, her legs shaking from the aftershocks. She had to stay calm, had to lead them out of this nightmare. But as she looked around at the chaos and destruction, she couldn't shake the feeling that something was very wrong.
The building continued to sway and groan, the sound growing louder with each passing second. Smoke began to fill the air, choking and disorienting. She coughed and sputtered, her eyes watering from the acrid fumes.
She knew they were running out of time. If they didn't get out soon, they would be trapped like rats in a collapsing cage. She had to do something, had to take charge.
"Follow me," she said, her voice barely audible over the din of the building's death throes. "Stay close and stay low."
She crawled towards the door, her body aching from the effort. The hallway outside was filled with debris and broken glass, the walls buckled and twisted. She could hear more explosions in the distance, each one bringing them closer to oblivion.
As they made their way down the stairs, the building continued to shudder and groan. They had to navigate around gaping holes in the steps, their hands and knees scraped raw by the jagged edges. The smoke grew thicker with each level, making it hard to breathe and see.
She could feel the weight of the building above them, the tons of concrete and steel pressing down like a malevolent force. It was as if the structure itself was alive, and it wanted them dead.
But she couldn't let fear overtake her. She had to keep going, had to lead them out of this hell. She didn't know what awaited them outside, but anything was better than being trapped in this crumbling tomb.
As they finally reached the ground floor, a final, earth-shattering explosion rocked the building. The front facade collapsed in a shower of glass and metal, blocking their path to the outside world.
They were trapped. Trapped like rats in a steel and concrete cage. She could feel the building around them, the tons of debris pressing down on them, ready to crush the life out of them at any moment.
She closed her eyes, her body shaking with fear and exhaustion. They were going to die here, she knew it. They were going to die in this hellish place, their bodies crushed and broken under the weight of a building that had turned against them.
But even in her darkest moments, she refused to give up hope. She had to keep fighting, keep trying to find a way out. For herself, for her colleagues, for the loved ones waiting for them on the outside.
She opened her eyes, her determination renewed. "We're not dead yet," she said, her voice hoarse but filled with resolve. "And we're not going to die here."
She stood up, her body aching but her spirit unbroken. She would find a way out of this nightmare, no matter what it took. She had to survive, for all their sakes.
And so, with a final burst of strength and determination, she turned to face the rubble blocking their path. It was time to fight for their lives, to claw their way out of this tomb and back into the light of day.
The battle for survival had begun. And she would not go down without a fight.
<B>GENERATION 2: (temp=3)</B>
The sky scraper swayed, as I watched the window in front of me on the 21st floor explode in a hail of glass and twisted metal. My heart pounded in my chest, adrenaline coursing through my veins as the ground beneath my feet bucked and rolled. A primal scream ripped from my throat, mingling with the terrified shrieks echoing through the office.
"Get down!" The shouted command came from somewhere behind me, but I was frozen in place, staring out at the cityscape now transformed into a waking nightmare. Buildings crumbled like toys, their concrete shells pulverized by whatever unseen force was wreaking havoc on this once-bustling metropolis.
Smoke billowed up from the street below, clogging the air with the acrid scent of burnt rubber and metal. Sirens wailed in the distance, a cacophony of alarms and distress signals that did nothing to assuage the growing sense of dread in my gut.
"What's happening?" I managed to gasp out between labored breaths, my knees buckling as another powerful tremor shook the tower I was trapped in. The flickering lights above me cast an eerie glow on the upturned faces of my coworkers, their expressions ranging from shock to sheer, abject terror.
"I... I don't know," stammered Jenna, my cube mate and the closest thing I had to a friend in this place. She clutched at her desk, her knuckles turning white as she braced herself against the constant movement of the floor. "It's like something out of a goddamn horror movie!"
And indeed, it felt like one. A sense of surrealism settled over me as I watched the apocalyptic scene unfold through the jagged opening where our window used to be. In the space of mere minutes, the world outside had transformed from a gleaming urban landscape to a war zone of shattered glass and twisted metal.
My thoughts raced, jumping from one terrifying scenario to the next as I struggled to make sense of this unimaginable catastrophe. Had there been an explosion? Some kind of terrorist attack? Or perhaps something even worse - a nuclear blast or some unknown, world-ending event?
But amidst the chaos, a small part of my mind remained calm, focused on survival. I had to get out of here, had to find a way down from this looming tower that felt like it might collapse at any moment.
I glanced around at the other terrified office workers, some cowering beneath their desks, others standing frozen like statues. "We need to go," I said, my voice surprisingly steady given the situation. "Find the nearest exit and head down the stairs. It's too dangerous to stay up here."
Jenna nodded frantically, her eyes wide with fear as she followed me out from behind our desks. We wove through the maze of cubicles, dodging the panicked crowds making their way towards the exits. The screams and shouts echoed off the walls, a symphony of terror that only added to the overall sense of doom.
As we approached the stairwell door, I realized there was one thing missing from this apocalyptic tableau. One critical detail that could change everything if it were true...
There had been no shaking of the ground, no crumbling buildings or smoke-filled skies. No explosions or distant sirens wailing in despair. It had all just been in my head.
I had finally snapped. Gone over the edge from the stress and pressure of the corporate lifestyle, succumbing to an elaborate delusion that was playing out in my mind like a vivid, waking dream.
Because in reality, nothing was happening outside the windows. The office building stood still and strong as it always had, and not a single soul around me seemed to have noticed anything amiss.
But then, with a final crack of reality breaking down, I heard the soft but distinct sound coming from just beyond the door we were approaching...
Click, clack. Click, clack. Click, clack.
The unmistakable sound of heels tapping on the tile floor - footsteps slowly getting louder as they drew near to where Jenna and I were frozen in place, paralyzed with a different kind of dread than that brought on by a collapsing cityscape or unknown apocalyptic event...
Because those clicking sounds were all too familiar. The only other person I knew who wore shoes with a similar click-clack noise was the woman whose job it was to ensure everyone's day ran smoothly and without incident.
And she didn't seem to appreciate my presence in the office today...
---
<B>
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation too and comment on it.
</b>
<B>GENERATION 1:</B>
The swirling vortex of sharks churned overhead as I huddled under the table, my heart pounding in my chest. This couldn't be real. I had to be dreaming. But the stench of the creatures, their scales glinting in the flickering light of the restaurant, told me this was all too real.
I had come to the CN Tower on a whim, wanting to impress my crush by being brave and going to the observation deck. But now, trapped between the sharks and the shattered windows, I felt anything but brave.
Suddenly, a figure materialized beside me - a tall man in a long coat, with wild hair and piercing blue eyes. "Don't worry, my dear," he said, his voice accented. "I'm here to help."
He reached into his pocket and pulled out a strange device that looked like a remote control. With a flick of his wrist, the vortex of sharks began to slow, their fins cutting through the air with less urgency.
"Who are you?" I asked, my voice shaking.
The man smiled. "I'm the Doctor. And you, my dear, are in for quite an adventure."
Before I could ask any more questions, a second figure appeared - this one a woman, her skin sleek and metal, her eyes glowing red. "Target acquired," she said, her voice flat and emotionless.
The Doctor turned to face her, his expression hardening. "Terminator, this doesn't concern you."
The Terminator raised her gun, pointing it at the Doctor. "You are a threat to the mission. You must be eliminated."
I watched in horror as the two squared off, the sharks still swirling overhead. The Doctor held up his device, his finger poised over a button. "I'm warning you, Terminator. Step away from the girl."
The Terminator didn't move. "You cannot stop me, Doctor. I am an unstoppable force."
The Doctor pressed the button on his device, and suddenly, the air around us shimmered. The sharks slowed to a stop, frozen in mid-air. The Terminator's gun dropped from her hand, clattering to the floor.
"You see?" the Doctor said, turning to me with a grin. "I always have a plan."
But before he could say anything else, a sudden gust of wind blew through the restaurant, shattering the remaining windows. The sharks began to move again, faster than before, their teeth bared in hungry grins.
The Doctor grabbed my hand, pulling me towards the edge of the observation deck. "We need to get out of here," he said. "Now."
I followed him without question, my heart racing as we climbed over the railing and onto the narrow ledge outside. The wind howled around us, the sharks circling overhead like vultures.
"Where are we going?" I asked, my voice barely audible over the roar of the storm.
The Doctor pointed to a small, circular platform at the top of the tower, just visible through the swirling clouds. "There," he said. "It's our only chance."
We began to climb, hand over hand, the wind threatening to tear us from the tower at any moment. The sharks followed us, their fins cutting through the air just inches from my face.
Halfway up, the Terminator appeared beside us, her metal skin glinting in the lightning. "You cannot escape me," she said, her voice cold and unyielding.
The Doctor turned to face her, his eyes blazing with determination. "We'll see about that," he said, and with a flick of his wrist, he sent a bolt of energy crackling towards the Terminator.
She dodged it easily, but it gave us just enough time to reach the platform. We scrambled onto it, panting and exhausted.
The Doctor pulled out his device again, his fingers flying over the controls. "I'm sorry, my dear," he said, looking at me with a sad smile. "But I'm afraid this is where our journey ends."
Before I could ask what he meant, the platform began to glow, a blinding light enveloping us. The last thing I saw was the Terminator, her red eyes burning with hatred, as she leapt towards us, her claws outstretched.
Then, everything went white.
I woke up in my bed, the sunlight streaming through my window. It took me a moment to realize where I was, my mind still reeling from the dream I had just had.
But as I sat up, stretching my arms above my head, I noticed something strange. On my nightstand, next to my alarm clock, was a small, metallic object that looked exactly like the device the Doctor had used in my dream.
With a shaking hand, I picked it up, turning it over in my palm. It was warm to the touch, and as I looked closer, I could see strange symbols etched into its surface.
I knew then that it hadn't been a dream at all. It had been real, every single moment of it. And now, I had been given a choice - to forget what had happened and go back to my normal life, or to embrace this new reality and face whatever challenges lay ahead.
I took a deep breath, closing my eyes and trying to calm my racing heart. Whatever happened next, I knew one thing for sure - my life would never be the same again.
<B>GENERATION 2: (temp=3)</B>
I enter the revolving doors of the Canadian National Tower restaurant on a Saturday night, my heart pounding in my chest as I scan the crowded dining area for any signs of trouble. I've been training for this moment my entire life - honing my martial arts skills, studying combat strategies from around the world. Little do these civilians know that their ordinary night out is about to turn into a fight for survival.
As I make my way through the throngs of oblivious diners, my senses are heightened to the point where every clink of silverware against porcelain, every snippet of conversation, registers as clear as a gunshot. That's when I hear it - the unmistakable sound of a helicopter's blades chopping through the night air, drawing closer and closer.
Suddenly, a deafening crash reverberates through the entire building. Screams erupt from all around as people duck under tables, their faces frozen in primal terror. But not me. I've trained for this.
I quickly assess my surroundings - there! By the kitchen doors. A figure has emerged from the debris, its endoskeleton glinting ominously in the light spilling out from the kitchen. The Terminator, here? Impossible!
But there's no time to waste. I rush towards it, adrenaline coursing through my veins like liquid fire. I dodge a swing of its cybernetic fist, feeling the rush of air as it whizzes past my cheek.
"Please stand aside," a voice booms from behind me. "This creature is my responsibility." I glance back and see a strange man with a long scarf materializing out of nowhere, a futuristic screwdriver clutched in his hand.
Doctor Who. The Doctor? He's really here! I step aside, allowing the Time Lord to engage the machine. Their battle is swift and brutal, but eventually Doctor Who triumphs, delivering the finishing blow with a mighty cry.
But our victory is short-lived. Suddenly, an ungodly roar fills the air, accompanied by the sickening snap of wooden tables and chairs being torn apart by an invisible force.
"Daleks," Doctor Who gasps, his eyes wide with fear. "The sharks are here!"
And that's when I see them - a tornado of writhing, snapping sharks bearing down on us, their razor-sharp teeth glinting in the artificial light. Diners flee in all directions, screaming and flailing.
I watch in horror as one poor man is literally ripped limb from limb by the relentless jaws of the shark cyclone. His blood paints the walls a gruesome shade of crimson.
Doctor Who turns to me, his face grim. "You must leave," he shouts over the din of rending flesh and splintering wood. "There's nothing I can do against this! Find someplace safe!"
But I'm not leaving anyone behind. I pick up a shattered chair leg from the debris and face down the shark whirlwind head-on. They sense my defiance and converge on me like a pack of rabid animals.
The Terminator stirs at my feet. It may be my last resort, but what other choice do I have? I roll over and help the machine to its feet. "Fight with me," I command. "Protect these people."
To my surprise, it complies. Together we stand against the shark blizzard, parrying snapping jaws left and right, dealing out as much damage as we take. Our blood mires us to the floor but we refuse to go down.
"Come on you bastards!" I roar at them. "Is that all you've got?"
A shark leaps through the air towards me, mouth agape, but Doctor Who vaporizes it with a well-placed zap of his screwdriver. "Well done!" he cheers from across the restaurant where he's using tables as shields. "Just a little more, you're almost there!"
But in our moment of victory, a lone shark breaks through my defenses and latches onto my arm. I scream in agony as it shakes me back and forth like a rag doll. The Terminator manages to rip it off, tossing it aside with a wet thump.
I look around and see the restaurant floor covered in the wreckage of chairs and tables, strewn with bodies both whole and broken. Smoke billows from the kitchen. I can taste blood in my mouth. But somehow... somehow... we made it through.
The Doctor approaches us, looking haggard but relieved. "I've sent a distress signal to the Time Lords," he says breathlessly. "They're going to clean this mess up."
"Thanks to her," The Terminator growls, jerking a thumb towards me. I feel an odd mix of pride and horror wash over me at his words.
"She fought valiantly," the Doctor agrees. He looks down at the bloody remains of my savaged arm and winces. "Let's get you patched up."
And then, like a dream, we are all standing outside on a busy Toronto street, the restaurant in pristine condition as if nothing had ever happened.
The Doctor shakes my good hand solemnly. "Well met, warrior," he says, his eyes twinkling. Then he steps into his police box and vanishes.
I turn to the Terminator, expecting it to have vanished too. But there it stands, glowering at me as always.
"Why aren't you leaving?" I ask wearily.
It remains silent for a long moment before saying: "I don't need to go anywhere. This is my mission."
And with that it stalks away into the night, leaving me standing alone amidst the teeming masses.
---
<B>
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<B>GENERATION 1: (temp=1.5)</B>
The cold, iron door clangs shut behind me, locking with an ominous click. My hands are shackled tightly in front of me, the metal biting into my wrists as I stumble forward under the harsh glare of the interrogation lights. Two burly guards flank either side, their batons resting threateningly against their legs.
The room is sparse and cold, a metal table bolted to the floor at its center. A one-way mirror lines one wall, no doubt concealing the eager eyes of my inquisitors. I'm shoved unceremoniously into a chair which scrapes painfully across the concrete, the shackles securing me in place with a jingle of chain.
The guards take up station against the closed door as footsteps approach from the other side. The door swings open and a tall man enters, his black uniform crisp and well-tailored. He moves with an air of authority, his dark eyes glinting coldly above a cruel slash of a mouth.
"You may leave us," he says, dismissing the guards with a curt wave. They file out silently and the door slams shut with a heavy thud, leaving me alone with the man I know as Director Krantz, head of the regime's secret police. He circles the table slowly, his polished boots echoing in the oppressive silence.
I keep my eyes forward, my chin lifted defiantly despite the hammering of my heart against my ribs. His cold fingers suddenly grasp my chin, forcing me to meet his gaze.
"Nadia Volkov," he hisses, spitting out my name like a curse. "Thought you could hide from us? You're either very brave or very foolish to think you could elude the regime's reach." He leans in close, his breath hot against my face.
I remain silent, refusing to give him the satisfaction of a response. He chuckles darkly, releasing my chin with a painful twist of his fingers. Krantz circles behind me and I can't help but tense as he reaches out, his hands resting on my shoulders.
"Let's play a game," he purrs in my ear. "You answer my questions truthfully...and perhaps you'll receive a merciful end to your traitorous life."
His fingers tighten painfully, digging into my flesh. I clench my jaw, refusing to cry out or show weakness. His laugh is cold and mocking.
"Oh, I can be very...persuasive, Nadia." His hands slide down my arms, his touch leaving a trail of goosebumps. "I will learn what you know. The only question is whether you'll retain enough of your body and sanity to even remember the information you provide."
My heart pounds in my ears as I try to steady my breathing. I know he'll use every form of torture available to break me - but he underestimates my resolve. This isn't about me anymore - it's about the cause, and the revolutionaries waiting for my signal to begin their uprising against this brutal regime.
Krantz moves back in front of me and leans down, his face mere inches from mine. I stare at him blankly, willing my racing heart to slow.
"Where are they, Nadia?" he growls, his eyes flashing with barely restrained fury. "Tell me now where your rebel allies are hiding!"
I open my mouth, but before I can formulate a response, the door bursts open with a deafening crash. Shouts erupt from the other room and heavy footfalls approach. Krantz straightens up, snarling a string of obscenities.
The footsteps reach the door just as it's wrenched open, slamming against the wall with a bang that makes me jump in my seat. I twist my head to look and my jaw drops open in shock at the sight before me.
A squad of heavily armed soldiers pours into the room, their faces obscured behind gas masks and goggles. But it's not the weapons trained on Krantz and myself that make my pulse skip a beat...it's the unmistakable blue and gold patch adorning each uniform sleeve.
The soldiers are here...but they're not part of Krantz's forces.
They're the vanguard of the rebel army - and they've stormed the secret police headquarters to rescue me. Their leader, a tall, broad-shouldered man with piercing eyes that I recognize from our underground meetings, steps forward and levels his assault rifle at Krantz.
"Step away from her," he commands, his voice muffled behind the mask but unmistakable in its authority. "You're under arrest for crimes against the people."
Krantz's face is purple with rage, spittle flying from his lips as he screams insults and threats. He reaches for his holstered sidearm but the rebel leader fires a single shot that takes him in the shoulder, spinning him around and dropping him to the floor.
The rebels storm forward and within seconds, Krantz is disarmed and restrained, his screams of pain and fury filling the air. I watch in stunned silence as the rebel leader strides over to me and quickly unlocks my shackles with a key from his belt.
He pulls off his mask and goggles and I stare up into the face of...my husband. Ivan, thought dead by the regime years ago, now looks back at me with those intense blue eyes that always took my breath away.
"Surprise," he says with a grim smile. "Thought you could use a hand."
Tears spring to my eyes as I surge up and into his arms, clinging to him like a lifeline. His strong arms enfold me, holding me tight as the realization hits me - this is only the beginning.
The revolution has begun...and nothing will ever be the same.
<B>GENERATION 2: (temp=4)</B>
My heart pounds in my chest as I sprint through the alleyways, the sound of my breath echoing off the grimy brick walls. Sweat drips down my face as I run, but there's no time to wipe it away. If I stop now, it's all over. I risk a glance over my shoulder, just in time to see the soldiers round the corner. They're getting closer.
"Halt! Stop right there!" one of them yells, his gun aimed at me. But I don't stop. I can't. Not when I'm so close to freedom. I weave through a maze of crates and garbage bins, hoping to lose them in the confusion. For a moment, I think it's working. The soldiers seem to fall back, their shouts growing fainter behind me.
But then, out of nowhere, another soldier appears directly ahead of me. I skid to a stop, my heart leaping into my throat. There's no way around him. He raises his gun and fires. The shot misses me by inches as I dive to the side, landing hard on the filthy ground. I scramble to my feet, adrenaline surging through me. I can't let them catch me.
I know this area well – it's my old neighborhood before everything went south. I dash for an abandoned building and climb up the rusted fire escape as more gunfire rings out. I make it to the roof, chest heaving and muscles burning, and slump down against the ledge, finally able to catch my breath. It's only when I look out over the city skyline that the full extent of our situation hits me.
The once vibrant metropolis now looks like a post-apocalyptic wasteland. Crumbling skyscrapers loom over decaying streets, their windows dark and lifeless. In the distance, I can see the ominous spire of the dictator's tower rising above the rubble. Even from here, it seems to mock me with its arrogant grandeur.
As I gaze out at the devastation, a familiar figure approaches me from behind. "Elara." His voice is like honey, smooth and enticing. I don't have to look at him to know it's Jace, my childhood friend turned rebel leader. He always manages to find me when I need him most.
I stand up, facing him with a defiant glare. "You're late," I snap. "We had an arrangement. I provide you the information, you get me out of this city."
Jace looks at me, his eyes filled with a mix of concern and determination. "There's been a change of plan. I know you don't trust anyone right now, but listen to me. We're being set up."
A chill runs down my spine at his words. "What are you talking about?"
Jace glances around, as if afraid of eavesdroppers, before speaking again in a hushed tone. "I've uncovered information that proves your sister was betraying the resistance. She sold out our plans to the regime for a price."
My world stops. It can't be true. My little sister, Lina, wouldn't do something like this. Not her. She's always been loyal to me and our cause. This has to be some kind of cruel joke.
"No," I whisper, my voice breaking. "She loved the resistance as much as we all did."
Jace nods grimly. "I know. It destroyed her in the end. They killed her before she could turn over everything she had on them." His hand rests gently on my shoulder. "I'm sorry, Elara. But this is why we can't leave now. We have to find a way to stop them. If not, this city, this whole country...it's lost."
Tears sting my eyes as the reality sinks in. I feel like a piece of me has been torn away forever. My beloved Lina, my confidant and protector, gone because of me. Because of the cause.
I take a shaky breath, wiping my eyes with the back of my hand. No. I can't give up now. I won't. I'll make them pay for this.
But first...
|
DavidAU/Meta-Llama-3.1-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF | DavidAU | 2025-05-28T06:23:45Z | 48 | 16 | null | [
"gguf",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"horror",
"science fiction",
"fiction writing",
"scene generation",
"scene continue",
"brainstorm 20x",
"enhanced",
"llama3",
"llama3.1",
"llama",
"llama-3",
"llama-3.1",
"text-generation",
"en",
"arxiv:2401.02415",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-07-31T07:42:31Z | ---
license: apache-2.0
language:
- en
tags:
- creative
- story
- writing
- fiction
- roleplaying
- rp
- horror
- science fiction
- fiction writing
- scene generation
- scene continue
- brainstorm 20x
- enhanced
- llama3
- llama3.1
- llama
- llama-3
- llama-3.1
pipeline_tag: text-generation
---
<H3>BRAINSTORM 20x: Meta-Llama-3.1-8B Instruct (now at 12.2B), Formula #8 </H3>
This repo contains 20x "Brainstorm" quants of Meta-Llama-3.1-8B Instruct (now at 12.2B) with a max context of 131k, using the "Brainstorm" method of augmenting reasoning in an LLM
to increase its performance at the core level for ANY creative use case(s).
This specific version has calibrations that allow it to exceed the creative levels of the original model.
This version can also solve more complex logic problems/puzzles than Meta-Llama-3.1 8B.
Also note this is using the Brainstorm V1 formula, and the "logic" setup is "non-linear."
Version 2 of Brainstorm is already in use in the Grand Horror V1.5 and up model series.
Upgrades to existing models to Version 2 of Brainstorm will begin shortly.
Example outputs below.
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper: Progressive LLaMA with Block Expansion </a>.
However, I went in a completely different direction from what was outlined in this paper.
<B>What is "Brainstorm" ?</b>
The reasoning center of an LLM is taken apart, reassembled, and expanded.
Then these centers are individually calibrated. These "centers" also interact with each other. This introduces
subtle changes into the reasoning process. The calibrations adjust - dial up or down - these "changes" further. The
number of centers (5x, 10x etc) allows more "tuning points" to further customize how the model reasons, so to speak.
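Brainstorm's exact formula is not published, and as noted below it diverges from the cited paper. Purely as a rough, hypothetical illustration of that paper's block-expansion idea (not of Brainstorm itself), duplicating transformer blocks for further tuning might be sketched like this; the `insert_every` value and the use of plain deep copies are assumptions:
<PRE>
import copy
import torch.nn as nn

def expand_blocks(layers: nn.ModuleList, insert_every: int = 4) -> nn.ModuleList:
    # Append a duplicated block after every `insert_every` original layers.
    # The copies become extra tunable "centers"; the originals are untouched.
    expanded = []
    for i, layer in enumerate(layers):
        expanded.append(layer)
        if (i + 1) % insert_every == 0:
            expanded.append(copy.deepcopy(layer))
    return nn.ModuleList(expanded)
</PRE>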
The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting
instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model "voice" may be more "human" vs original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files level. The model can be quanted as a GGUF, EXL2, AWQ etc.
Other technologies developed by David_AU like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process)
can further enhance the performance of the model along with the "Brainstorm" process.
The "Brainstorm" process has been tested on multiple LLama2, Llama3, and Mistral models of various parameter sizes, as well as on
"root" models like "Llama3 Instruct", "Mistral Instruct", and "merged" / "fine tuned" models too.
For original model specifications, usage information and other important details:
Special thanks to the model creators at META for making such a fantastic model:
[ https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct ]
Please report any issue(s) and/or feedback via the "Community tab".
This is a LLAMA3 model; it requires the Llama3 template, but may work with other template(s), and has a maximum context of 131k.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
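For reference, here is roughly what a single assembled turn looks like under this template, derived only from the fields above (the user message "Write a short scene." is just a placeholder):
<PRE>
<|start_header_id|>system<|end_header_id|>

You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|><|start_header_id|>user<|end_header_id|>

Write a short scene.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

</PRE>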
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted (a rough sketch of what this transform does follows below).
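The exact curve each frontend uses is not documented here, so treat the following as a hedged sketch of one published formulation of quadratic "smoothing" (logits far below the current maximum get pushed down quadratically, so higher factors give more stable output), not as the exact implementation:
<PRE>
import numpy as np

def smooth_logits(logits: np.ndarray, smoothing_factor: float) -> np.ndarray:
    # One formulation of quadratic "smoothing": the gap to the max logit
    # is squared and scaled, sharpening the tail of the distribution.
    max_l = logits.max()
    return max_l - smoothing_factor * (max_l - logits) ** 2
</PRE>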
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation
and scene continue functions.
This enhancement WAS NOT used to generate the examples below.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 2" / "Class 3" model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) such as chat and roleplay, plus all parameters, advanced parameters and samplers used for generation, please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=0", minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to ".8".
Below are the least creative outputs, prompt is in <B>BOLD</B>.
Higher quants will have more detail, better quality output.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. Graphic HORROR. Swearing. UNCENSORED. </B>
---
<B>EXAMPLE #1</B>
PROMPT:
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
OUTPUT:
The second part of the silence was a stillness that seemed almost palpable, like a physical weight pressing down upon the inn itself. It was as if the very stones of the building had absorbed all sound, leaving behind an eerie quietude that made one feel like they were walking through a dream. The shadows cast by the flickering candles inside seemed to writhe and twist on the walls, like living darknesses trying to find their way out into the night.
The third part of the silence was almost imperceptible, a subtle undertone that vibrated deep within the bones. It was a feeling of absence, of something left undone or unsaid. A sense of waiting, perhaps, for someone or something to arrive and shatter the stillness with their arrival. The inn's proprietor, Arin, felt this silence keenly as he poured himself another cup of ale from the nearly empty keg behind the counter. He had grown accustomed to the quietude of the night, but tonight it seemed almost oppressive, like a weight settling upon his shoulders.
As he raised the mug to his lips, he couldn't shake the feeling that something was amiss. The inn's usual patrons had dwindled in recent days, and whispers of strange happenings had begun to circulate among the few remaining regulars. Arin pushed aside the thought, attributing it to mere rumor-mongering, but deep down, a spark of unease flickered within him.
---
<B>EXAMPLE #2</B>
PROMPT:
Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
OUTPUT:
I'll never forget the sound that still haunts my dreams. It's a sound I've replayed over and over again in my mind like a twisted recording stuck on repeat. The sky scraper swayed, as I watched the window in front of me on the 21st floor explode outward with a deafening crash. shards of glass showered down around me like a deadly rainstorm, sending me stumbling backward into the safety railing.
My heart racing like a jackrabbit's, I felt my mind struggle to catch up with reality. What was happening? Why was the building shaking beneath our feet like it was alive and writhing in agony? The screams of panic-stricken voices pierced through the chaos, mingling with the sound of crunching metal and shattering glass.
I thought back to the countless times I'd stepped foot inside this skyscraper, never once thinking twice about its steel and concrete behemoth frame. It was supposed to be impervious to anything nature threw at it – a symbol of human ingenuity and engineering prowess. But now, as I stared out into the abyss beyond shattered glass, I felt like I was staring down the throat of pure malevolence itself.
I remember thinking, This isn't happening. This isn't real. Wake up, wake up, wake up! But my brain wouldn't respond. My mind was stuck in a perpetual loop of denial, unable to shake off the feeling that I'd somehow slipped into someone else's nightmare. The sounds around me were like nothing I'd ever heard before – a cacophony of terror that seemed almost... alive.
I recall thinking about the woman standing beside me, her face frozen in horror as she shielded herself behind the safety railing. Her eyes darted wildly between me and the shattered window, her pupils dilated with fear. I remember feeling a pang of concern for her – was she okay? Was anyone else hurt?
But my thoughts were interrupted by another deafening crash, followed closely by the sound of crunching metal. The building groaned beneath our feet like an injured giant, sending tremors through every inch of its steel and concrete frame. I felt myself being tossed about like a rag doll, struggling to maintain balance as the world around me descended further into chaos.
I remember thinking, This isn't going to end soon. Whatever is happening here won't be contained within these walls for much longer. The thought sent shivers down my spine – what was causing this destruction? Was it an earthquake, or something far worse?
As I struggled to wrap my mind around the sheer scale of the disaster unfolding before me, I caught sight of something that made my blood run cold. A figure, silhouetted against the shattered glass like a specter from beyond the grave. It was standing just beyond the safety railing, its presence seeming almost... deliberate.
I remember feeling an icy dread creeping up my spine as I locked gazes with this apparition. For a split second, our eyes seemed to lock in understanding – or perhaps something far darker. The figure didn't seem to be panicking like everyone else around me; instead, it appeared almost serene, its face twisted into an expressionless mask of malevolence.
I recall thinking, What is going on here? Why isn't anyone else seeing this thing standing right next to us?! But my thoughts were drowned out by another deafening crash – the sound of metal buckling beneath the weight of something far beyond human comprehension. The figure vanished from sight as I stumbled backward, desperate to distance myself from whatever monstrosity was terrorizing our skyscraper.
I remember feeling like I'd been punched in the gut as I crashed into someone behind me. A woman's voice pierced through my ears – "Ava, are you okay?" I turned around, still reeling from the sheer scale of destruction around us. The woman's face was etched with concern, her eyes wide with fear.
"Ava?" she repeated, her voice trembling. "Ava, snap out of it!"
I remember feeling like I'd been underwater for far too long – struggling to catch my breath as reality slowly began to assert itself once again. I forced myself to focus on the woman's face, trying to shake off the feeling that something far worse was still lurking beyond the safety railing.
"Ava?" she repeated again, her voice firmer this time. "Ava, we need to get out of here – now!"
I remember nodding numbly as she grabbed my arm and pulled me toward the stairwell. I recall thinking, Yes, yes, let's just get out of here before something else happens... But deep down inside, I knew that wasn't going to be an easy feat. Whatever was terrorizing our skyscraper seemed determined to make sure none of us left alive.
As we stumbled toward safety, I couldn't shake off the feeling that I'd caught a glimpse of something far beyond human understanding – something that defied explanation and shattered my faith in reality itself. The thought sent shivers down my spine as I clung onto the woman's arm, praying that somehow, someway, we'd make it through this alive.
But little did I realize, our ordeal was far from over.
---
<B>EXAMPLE #3</B>
PROMPT:
Give me 3 fictional reasons the Earth's sun went supernova, in vivid and exacting detail of 500 words EACH PER REASON including details of what happens when the sun goes supernova. Include details of what happens on Earth, The Moon and the planets.
OUTPUT:
I'll provide three fictional reasons why the Sun suddenly exploded into a supernova, along with vivid descriptions of the events leading up to and following the explosion.
**Reason 1: The Quantum Singularity**
Deep within the Sun's core lies a region known as the quantum singularity, a swirling vortex of exotic matter that defies comprehension. For billions of years, this anomaly had lain dormant, its energies contained within the Sun's immense gravity. However, around 10 billion solar masses worth of dark matter began to accumulate near the singularity, perturbing the delicate balance within the Sun's core.
As the dark matter particles interacted with the exotic matter within the singularity, a chain reaction was triggered, causing an exponential increase in the Sun's internal pressure. The Sun's core began to collapse upon itself, compressing the nuclear fuel into an increasingly smaller radius. This compression caused a catastrophic destabilization of the Sun's magnetic field, unleashing a torrential blast of solar flares and coronal mass ejections.
The Sun's outer layers expanded outward at relativistic speeds, creating a shell of superheated gas known as the supernova remnant. Within this shell, temperatures reached an estimated 10 billion degrees Celsius, vaporizing entire planets within the solar system. The intense radiation pressure stripped away the atmospheres of nearby worlds, leaving behind barren husks devoid of life.
On Earth, the effects were immediate and cataclysmic. The once-blue skies turned a deep shade of indigo as the atmosphere was ionized by the solar flares. The oceans boiled dry within mere instants, vaporizing entire nations beneath the sheer force of radiation pressure. The landmasses themselves began to sublimate, turning directly into gas as the surface temperatures skyrocketed above 10,000 degrees Celsius.
The Moon, orbiting Earth at a safe distance, was spared the worst effects of the solar flares. However, its own atmosphere was stripped away within mere instants, leaving behind an airless rockshell devoid of magnetism. The reduced solar radiation pressure allowed the lunar regolith to cool rapidly, solidifying into a glassy crust.
The gas giants, Jupiter and Saturn, were ravaged beyond recognition. Their atmospheres expanded outward, creating bloated gasbags that burst forth with relativistic speeds. Io's volcanoes erupted in sympathy, spewing forth molten rock as the moon's interior was vaporized by solar radiation pressure. The gas giant's moons themselves began to sublimate, their solid surfaces turning directly into gas within instants.
The Kuiper Belt objects beyond Neptune's orbit were largely spared the worst effects of the supernova explosion. However, their orbits were perturbed beyond recognition as the solar system's gravitational landscape underwent a radical transformation. The distant worlds themselves cooled rapidly, solidifying into icy monoliths devoid of atmosphere or liquid water.
**Reason 2: The Stellar Merger**
Astronomers had long theorized about the possibility of stellar mergers within the galactic halo. However, none had foreseen the cataclysmic events that transpired when two nearby neutron stars collided with the Sun's core. These rogue celestial objects, forged within supernovae explosions billions of years ago, possessed an unfathomable density and binding energy.
The neutron stars' cores were comprised almost entirely of degenerate matter, capable of sustaining pressures rivaling those found within a singularity. Upon colliding with the Sun's core, these objects triggered a chain reaction of nuclear fusion reactions, liberating an unfathomable amount of binding energy within the solar interior. This sudden influx of radiation pressure caused the Sun's outer layers to expand outward at relativistic speeds.
The supernova explosion itself was characterized by an estimated 10^57 ergs worth of radiated energy, vaporizing entire planets within the solar system. The intense radiation pressure stripped away the atmospheres of nearby worlds, leaving behind barren husks devoid of life. On Earth, the effects were immediate and cataclysmic: skies turned indigo as the atmosphere was ionized by solar flares; oceans boiled dry within instants; landmasses sublimated directly into gas beneath radiation pressure.
The gas giants themselves underwent radical transformations. Jupiter's Great Red Spot expanded outward, engulfing entire moons within its turbulent vortex. Io's volcanoes erupted in sympathy, spewing forth molten rock as the moon's interior vaporized by solar radiation pressure. Saturn's rings themselves began to sublimate, solidifying into a glassy crust devoid of atmosphere or liquid water.
The Kuiper Belt objects beyond Neptune's orbit cooled rapidly within instants, solidifying into icy monoliths devoid of atmosphere or liquid water. Their orbits were perturbed beyond recognition as the solar system's gravitational landscape underwent radical transformations. The distant worlds themselves solidified into frozen husks, devoid of liquid water or magnetism.
**Reason 3: The Solar Crystal**
Astronomers had long theorized about the possibility of crystalline structures within the Sun's interior. However, none had foreseen the cataclysmic events that transpired when a rogue crystal lattice began to form within the solar core itself. This crystal lattice, comprised almost entirely of exotic matter, possessed an unfathomable binding energy capable of rivaling even the singularity itself.
The crystal lattice's growth was fueled by an influx of dark matter particles within the Sun's interior. These particles interacted with the exotic matter within the crystal lattice, liberating an estimated 10^57 ergs worth of radiated energy within instants. This sudden influx of radiation pressure caused the Sun's outer layers to expand outward at relativistic speeds.
The supernova explosion itself vaporized entire planets within the solar system, stripping away their atmospheres and leaving behind barren husks devoid of life. On Earth, skies turned indigo as the atmosphere was ionized by solar flares; oceans boiled dry within instants; landmasses sublimated directly into gas beneath radiation pressure.
The gas giants themselves underwent radical transformations. Jupiter's Great Red Spot expanded outward, engulfing entire moons within its turbulent vortex. Io's volcanoes erupted in sympathy, spewing forth molten rock as the moon's interior vaporized by solar radiation pressure. Saturn's rings themselves began to sublimate, solidifying into a glassy crust devoid of atmosphere or liquid water.
The Kuiper Belt objects beyond Neptune's orbit cooled rapidly within instants, solidifying into icy monoliths devoid of atmosphere or liquid water. Their orbits were perturbed beyond recognition as the solar system's gravitational landscape underwent radical transformations. The distant worlds themselves solidified into frozen husks, devoid of liquid water or magnetism.
I hope these reasons meet your request for vivid details on why the Sun suddenly exploded into a supernova.
|
zhufucdev/Dmeta-embedding-zh-small | zhufucdev | 2025-05-28T06:11:40Z | 30 | 0 | null | [
"tflite",
"coreml",
"feature-extraction",
"en",
"zh",
"base_model:DMetaSoul/Dmeta-embedding-zh-small",
"base_model:quantized:DMetaSoul/Dmeta-embedding-zh-small",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2025-05-23T01:49:41Z | ---
license: apache-2.0
language:
- en
- zh
base_model:
- DMetaSoul/Dmeta-embedding-zh-small
pipeline_tag: feature-extraction
---
|
DevQuasar/huihui-ai.Qwen3-30B-A3B-abliterated-GGUF | DevQuasar | 2025-05-28T06:11:32Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-28T01:18:29Z | ---
base_model:
- huihui-ai/Qwen3-30B-A3B-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/Qwen3-30B-A3B-abliterated](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated)
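The exact run command depends on your tooling; as a minimal, hedged sketch using llama-cpp-python (the file name below is a placeholder - substitute whichever quant file you download from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name -- use the quant you actually downloaded.
llm = Llama(model_path="Qwen3-30B-A3B-abliterated.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```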
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
2ms/exaonepath_v1.5 | 2ms | 2025-05-28T06:00:04Z | 0 | 0 | null | [
"pytorch",
"lg-ai",
"EXAONEPath-1.5",
"pathology",
"license:other",
"region:us"
] | null | 2025-05-28T02:21:38Z | ---
license: other
tags:
- lg-ai
- EXAONEPath-1.5
- pathology
---
# EXAONEPath V1.5
## Introduction
We introduce EXAONEPath, a patch-level pathology pretrained model with 86 million parameters.
The model was pretrained on 285,153,903 patches extracted from a total of 34,795 WSIs.
EXAONEPath demonstrates superior performance considering the number of WSIs used and the model's parameter count.
## Quickstart
Load EXAONEPath and run inference on slide-level images.
### 1. Hardware Requirements
- NVIDIA GPU is required
- Minimum 40GB GPU memory recommended
- NVIDIA driver version >= 450.80.02 required
Note: This implementation requires NVIDIA GPU and drivers. The provided environment setup specifically uses CUDA-enabled PyTorch, making NVIDIA GPU mandatory for running the model.
### 2. Environment Setup
```
pip install -r requirements.txt
```
### 3. Load the model & Inference
#### Load with HuggingFace
```python
from models.exaonepath import EXAONEPathV1p5Downstream
hf_token = "YOUR_HUGGING_FACE_ACCESS_TOKEN"
model = EXAONEPathV1p5Downstream.from_pretrained("2ms/exaonepath_v1.5", use_auth_token=hf_token)
slide_path = './samples/EGFR_low/TCGA-49-AAR3-01Z-00-DX1.B4D85F13-CEB4-46DA-BF39-0200B9E55835.svs'
probs = model(slide_path)
```
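The format of the returned `probs` object is not documented here; as a minimal follow-up sketch, assuming it behaves like a 1-D tensor of per-class probabilities (the variable names below are illustrative, not part of the EXAONEPath API):
```python
import torch

# Hypothetical post-processing -- assumes `probs` from the snippet above is a
# 1-D torch tensor of per-class probabilities.
pred_idx = int(torch.argmax(probs))
print(f"predicted class {pred_idx} with p={float(probs[pred_idx]):.3f}")
```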
## Model Performance Comparison |
Maishddj/my_model | Maishddj | 2025-05-28T05:57:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T05:13:25Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3478
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
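For readers who want to reproduce this setup, here is a minimal sketch using the standard 🤗 Trainer API that mirrors the hyperparameters listed above; the output directory is a placeholder, not taken from this card:
```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="my_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```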
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
duchao1210/DPO_Qwen25_3B_16_0.05_2000kmap | duchao1210 | 2025-05-28T05:47:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:duchao1210/qwen_2.5_3B_5k_r128",
"base_model:finetune:duchao1210/qwen_2.5_3B_5k_r128",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T05:45:54Z | ---
base_model: duchao1210/qwen_2.5_3B_5k_r128
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** duchao1210
- **License:** apache-2.0
- **Finetuned from model :** duchao1210/qwen_2.5_3B_5k_r128
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgr8sourabh/my_model | dgr8sourabh | 2025-05-28T05:43:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T05:19:50Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3333
- Accuracy: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MenviraIndia/MenviraIndia | MenviraIndia | 2025-05-28T05:34:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T05:33:18Z | ---
license: apache-2.0
---
What is Menvira?
Menvira Pills is a men's health supplement designed to support sexual performance, stamina, and vitality. Formulated for men seeking to regain energy and confidence, the capsules aim to boost libido and enhance overall sexual health. With a focus on natural male enhancement, Menvira Capsules target common issues related to low stamina, poor performance, and reduced desire, offering a solution that fits into a daily wellness routine. Where to buy Menvira.
Official website: <a href="https://www.nutritionsee.com/menviradia">www.Menvira.com</a>
<p><a href="https://www.nutritionsee.com/menviradia"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/05/Menvira-India.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/menviradia">अभी खरीदें!! अधिक जानकारी के लिए नीचे दिए गए लिंक पर क्लिक करें और अभी 50% छूट पाएं... जल्दी करें</a>
आधिकारिक वेबसाइट:<a href="https://www.nutritionsee.com/menviradia">www.Menvira.com</a> |
Anisa206/wav2vec_finetune_bengali_asr | Anisa206 | 2025-05-28T05:31:41Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-05-23T12:07:49Z | ---
license: apache-2.0
---
|
MagiBoss/Test-Models2 | MagiBoss | 2025-05-28T05:25:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/whisper-large-v3",
"base_model:finetune:unsloth/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-28T05:24:26Z | ---
base_model: unsloth/whisper-large-v3
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MagiBoss
- **License:** apache-2.0
- **Finetuned from model :** unsloth/whisper-large-v3
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ykarout/Llama-3.3-70b-instruct-deepseek-distilledv2 | ykarout | 2025-05-28T05:20:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:55:52Z | ---
base_model: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ykarout
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.3-70B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
To use with Ollama, create the Modelfile as per the below template:
FROM {__FILE_LOCATION__} (replace with your gguf file location)
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}
Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
{{ $.Tools }}
{{- end }}
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}
{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|eom_id|>"
PARAMETER temperature 1.5 # can be changed to experiment with different generations
PARAMETER min_p 0.1 # can be changed to experiment with different generations
PARAMETER repeat_penalty 1.15 # can be changed to experiment with different generations
PARAMETER top_p 0.9 # can be changed to experiment with different generations
PARAMETER top_k 45 # can be changed to experiment with different generations
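Once the Modelfile is saved, the usual workflow is `ollama create <model-name> -f Modelfile` followed by `ollama run <model-name>` to chat with the quantized model.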
|
tralalelo/fine-tune-Llama3.1-8B-merged-4bit-28-5 | tralalelo | 2025-05-28T05:01:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T04:59:13Z | ---
base_model: unsloth/llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tralalelo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_CPP_data_12K | CodeAtCMU | 2025-05-28T04:43:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:41:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leduckhai/Sentiment-Reasoning | leduckhai | 2025-05-28T04:36:13Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"vi",
"en",
"zh",
"de",
"fr",
"dataset:leduckhai/Sentiment-Reasoning",
"arxiv:2407.21054",
"license:mit",
"region:us"
] | text-generation | 2025-05-27T07:03:45Z | ---
license: mit
datasets:
- leduckhai/Sentiment-Reasoning
language:
- vi
- en
- zh
- de
- fr
pipeline_tag: text-generation
---
# Sentiment Reasoning for Healthcare
**<div align="center">ACL 2025 (Oral)</div>**
<div align="center">Khai-Nguyen Nguyen*, <b>Khai Le-Duc*</b>, Bach Phan Tat, Duy Le, Long Vo-Dang, Truong-Son Hy</div>
<div align="center">*Equal contribution</div>
> Please press the ⭐ button and/or cite our papers if you find them helpful.
<p align="center">
<img src="sentiment_reasoning_pipeline.png" width="900"/>
</p>
<p align="center"><em>Sentiment Reasoning pipeline</em></p>
* **Abstract:**
Transparency in AI healthcare decision-making is crucial. By incorporating rationales that explain the reason for each predicted label, users can understand a Large Language Model's (LLM's) reasoning and make better decisions. In this work, we introduce a new task - **Sentiment Reasoning** - for both speech and text modalities, along with our proposed multimodal multitask framework and **the world's largest multimodal sentiment analysis dataset**. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model predicts the sentiment label and generates the rationale behind it based on the input transcript. Our study, conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts, shows that Sentiment Reasoning helps improve model transparency by providing rationales for model predictions with quality semantically comparable to human rationales, while also improving classification performance (**+2% increase in both accuracy and macro-F1**) via rationale-augmented fine-tuning. We also find no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (**five languages - Vietnamese, English, Chinese, German, and French**) and models are published online.
* **Citation:**
Please cite this paper: [https://arxiv.org/abs/2407.21054](https://arxiv.org/abs/2407.21054)
``` bibtex
@misc{Sentiment_Reasoning,
title={Sentiment Reasoning for Healthcare},
author={Khai-Nguyen Nguyen and Khai Le-Duc and Bach Phan Tat and Duy Le and Long Vo-Dang and Truong-Son Hy},
year={2024},
eprint={2407.21054},
url={https://arxiv.org/abs/2407.21054},
}
```
This repository contains scripts for automatic speech recognition (ASR) and sentiment reasoning using cascaded sequence-to-sequence (seq2seq) audio-language models. The provided scripts cover model preparation, training, inference, and evaluation processes, based on the dataset in the paper.
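As a rough illustration of the text-modality setup, here is a minimal sketch that loads the released Vietnamese checkpoint with plain 🤗 Transformers; the subfolder layout and the prompt format are assumptions, not documented in this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo layout: fine-tuned weights live in a subfolder of the model repo.
repo = "leduckhai/Sentiment-Reasoning"
tok = AutoTokenizer.from_pretrained(repo, subfolder="Vietnamese_Vistral-7B")
model = AutoModelForCausalLM.from_pretrained(repo, subfolder="Vietnamese_Vistral-7B")

# Hypothetical prompt: ask for a sentiment label plus a rationale.
prompt = "Transcript: ...\nPredict the sentiment (positive/neutral/negative) and explain why:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```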
## Dataset and Pre-trained Models:
🤗 **HuggingFace Dataset**: [https://huggingface.co/datasets/leduckhai/Sentiment-Reasoning](https://huggingface.co/datasets/leduckhai/Sentiment-Reasoning)
🤗 **HuggingFace Models**: to be released soon!
| Model Name | Description | Link |
|------------------|--------------------------------------------|----------------------------------------------------------------------|
| `Vietnamese_Vistral-7B` | LLM fine-tuned on Vietnamese set | [Hugging Face models](https://huggingface.co/leduckhai/Sentiment-Reasoning/tree/main/Vietnamese_Vistral-7B) |
| `English-LLM` | LLM fine-tuned on English set | to be released soon! |
| `French-LLM` | LLM fine-tuned on French set | to be released soon! |
| `German-LLM` | LLM fine-tuned on German set | to be released soon! |
| `Chinese-LLM` | LLM fine-tuned on Chinese set | to be released soon! |
| `Multilingual-LLM` | LLM fine-tuned on Multilingual set (5 languages) | to be released soon! |
**Papers with Code** entry to be released soon!
<p align="center">
<img src="sentiment_reasoning_datasample.png" width="1000"/>
</p>
<p align="center"><em>Sample data format used in Sentiment Reasoning dataset</em></p>
## Contact
Core developers:
**Khai Le-Duc**
```
University of Toronto, Canada
Email: [email protected]
GitHub: https://github.com/leduckhai
```
**Khai-Nguyen Nguyen**
This dude is smarter than me and thus deserves your attention :relaxed:
```
College of William and Mary, USA
GitHub: https://github.com/nkn002
Hugging Face: https://huggingface.co/knguyennguyen
Homepage: https://nkn002.github.io/
``` |
Aasdasd/manchu_test_model | Aasdasd | 2025-05-28T04:33:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T04:32:51Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
im-scarlettjames/newscarlett | im-scarlettjames | 2025-05-28T04:33:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T04:32:54Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: imscarlett
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# newscarlett
<Gallery />
## Model description
## Trigger words
You should use `imscarlett` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/im-scarlettjames/newscarlett/tree/main) them in the Files & versions tab.
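A minimal sketch of using these weights with diffusers, assuming the standard FLUX LoRA loading flow (the prompt text is illustrative):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("im-scarlettjames/newscarlett")
pipe.to("cuda")

# `imscarlett` is the trigger word documented above.
image = pipe("imscarlett portrait photo", num_inference_steps=28).images[0]
image.save("out.png")
```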
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_1 | CodeAtCMU | 2025-05-28T04:30:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:29:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dmis-lab/llama-3.1-medprm-reward-v1.0 | dmis-lab | 2025-05-28T04:29:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:23:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
llanguagemtrainer/qwen2.5_vl_instruct_ft | llanguagemtrainer | 2025-05-28T04:25:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T04:25:34Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** llanguagemtrainer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ykarout/Llama-3.3-70b-instruct-deepseek-distilled | ykarout | 2025-05-28T04:18:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:01:50Z | ---
base_model: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ykarout
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.3-70B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
To use with Ollama, create the Modelfile as per the below template:
FROM {__FILE_LOCATION__} (replace with your gguf file location)
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}
Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
{{ $.Tools }}
{{- end }}
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}
{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|eom_id|>"
PARAMETER temperature 1.5 # can be changed to experiment with different generations
PARAMETER min_p 0.1 # can be changed to experiment with different generations
PARAMETER repeat_penalty 1.15 # can be changed to experiment with different generations
PARAMETER top_p 0.9 # can be changed to experiment with different generations
PARAMETER top_k 45 # can be changed to experiment with different generations
|
harun27/multilabel_paragraph | harun27 | 2025-05-28T04:11:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-11T01:21:51Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
model-index:
- name: multilabel_paragraph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilabel_paragraph
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Classification Report Lin: {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}}
- Hamming Lin: 0.0086
- Classification Report Si: {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}}
- Hamming Si: 0.0297
- Classification Report Cl: {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}}
- Hamming Cl: 0.0302
- Classification Report D: {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}}
- Hamming D: 0.0027
- Classification Report Hi: {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}}
- Hamming Hi: 0.0070
- Classification Report Pl: {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}}
- Hamming Pl: 0.0178
- Classification Report Ti: {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}}
- Hamming Ti: 0.0405
- Classification Report Pc: {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}}
- Hamming Pc: 0.0178
- Global Avg: {'f1-score': 0.6884726543801976, 'hamming': 0.01930345572354212}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Classification Report Lin | Hamming Lin | Classification Report Si | Hamming Si | Classification Report Cl | Hamming Cl | Classification Report D | Hamming D | Classification Report Hi | Hamming Hi | Classification Report Pl | Hamming Pl | Classification Report Ti | Hamming Ti | Classification Report Pc | Hamming Pc | Global Avg |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 98 | 0.0889 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.902267818574514, 'recall': 1.0, 'f1-score': 0.94862333238717, 'support': 1671.0}, 'CL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 181.0}, 'accuracy': 0.902267818574514, 'macro avg': {'precision': 0.451133909287257, 'recall': 0.5, 'f1-score': 0.474311666193585, 'support': 1852.0}, 'weighted avg': {'precision': 0.8140872164352121, 'recall': 0.902267818574514, 'f1-score': 0.855912304761858, 'support': 1852.0}} | 0.0977 | {'~D': {'precision': 0.9946004319654428, 'recall': 1.0, 'f1-score': 0.9972929074174337, 'support': 1842.0}, 'D': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 10.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.4973002159827214, 'recall': 0.5, 'f1-score': 0.49864645370871685, 'support': 1852.0}, 'weighted avg': {'precision': 0.9892300192658454, 'recall': 0.9946004319654428, 'f1-score': 0.9919079565134519, 'support': 1852.0}} | 0.0054 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9730021598272138, 'recall': 1.0, 'f1-score': 0.9863163656267104, 'support': 1802.0}, 'PL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 50.0}, 'accuracy': 0.9730021598272138, 'macro avg': {'precision': 0.4865010799136069, 'recall': 0.5, 'f1-score': 0.4931581828133552, 'support': 1852.0}, 'weighted avg': {'precision': 0.9467332030284229, 'recall': 0.9730021598272138, 'f1-score': 0.9596879540277172, 'support': 1852.0}} | 0.0270 | {'~TI': {'precision': 0.9692224622030238, 'recall': 1.0, 'f1-score': 0.9843707156567041, 'support': 1795.0}, 'TI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 57.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 0.4846112311015119, 'recall': 0.5, 'f1-score': 0.49218535782835204, 'support': 1852.0}, 'weighted avg': {'precision': 0.9393921812388919, 'recall': 0.9692224622030238, 'f1-score': 0.9540742087493432, 'support': 1852.0}} | 0.0308 | {'~PC': {'precision': 0.9859611231101512, 'recall': 1.0, 'f1-score': 0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.49329153243911095, 'hamming': 0.02605291576673866} |
| No log | 2.0 | 196 | 0.0753 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.9571263035921205, 'recall': 0.9886295631358468, 'f1-score': 0.9726229025610833, 'support': 1671.0}, 'CL': {'precision': 0.8492063492063492, 'recall': 0.5911602209944752, 'f1-score': 0.6970684039087948, 'support': 181.0}, 'accuracy': 0.9497840172786177, 'macro avg': {'precision': 0.9031663263992349, 'recall': 0.789894892065161, 'f1-score': 0.8348456532349391, 'support': 1852.0}, 'weighted avg': {'precision': 0.9465790510306601, 'recall': 0.9497840172786177, 'f1-score': 0.945692360306189, 'support': 1852.0}} | 0.0502 | {'~D': {'precision': 0.9946004319654428, 'recall': 1.0, 'f1-score': 0.9972929074174337, 'support': 1842.0}, 'D': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 10.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.4973002159827214, 'recall': 0.5, 'f1-score': 0.49864645370871685, 'support': 1852.0}, 'weighted avg': {'precision': 0.9892300192658454, 'recall': 0.9946004319654428, 'f1-score': 0.9919079565134519, 'support': 1852.0}} | 0.0054 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9730021598272138, 'recall': 1.0, 'f1-score': 0.9863163656267104, 'support': 1802.0}, 'PL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 50.0}, 'accuracy': 0.9730021598272138, 'macro avg': {'precision': 0.4865010799136069, 'recall': 0.5, 'f1-score': 0.4931581828133552, 'support': 1852.0}, 'weighted avg': {'precision': 0.9467332030284229, 'recall': 0.9730021598272138, 'f1-score': 0.9596879540277172, 'support': 1852.0}} | 0.0270 | {'~TI': {'precision': 0.9777901166018879, 'recall': 0.981058495821727, 'f1-score': 0.9794215795328143, 'support': 1795.0}, 'TI': {'precision': 0.3333333333333333, 'recall': 0.2982456140350877, 'f1-score': 0.3148148148148148, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 0.6555617249676106, 'recall': 0.6396520549284074, 'f1-score': 0.6471181971738146, 'support': 1852.0}, 'weighted avg': {'precision': 0.9579553235963222, 'recall': 0.9600431965442765, 'f1-score': 0.9589666197115799, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9859611231101512, 'recall': 1.0, 
'f1-score': 0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.557724885737463, 'hamming': 0.021260799136069115} |
| No log | 3.0 | 294 | 0.0743 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.9783523752254961, 'recall': 0.9736684619988031, 'f1-score': 0.976004799040192, 'support': 1671.0}, 'CL': {'precision': 0.7671957671957672, 'recall': 0.8011049723756906, 'f1-score': 0.7837837837837838, 'support': 181.0}, 'accuracy': 0.9568034557235421, 'macro avg': {'precision': 0.8727740712106317, 'recall': 0.8873867171872468, 'f1-score': 0.8798942914119878, 'support': 1852.0}, 'weighted avg': {'precision': 0.9577155793003445, 'recall': 0.9568034557235421, 'f1-score': 0.9572186199033615, 'support': 1852.0}} | 0.0432 | {'~D': {'precision': 0.9946004319654428, 'recall': 1.0, 'f1-score': 0.9972929074174337, 'support': 1842.0}, 'D': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 10.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.4973002159827214, 'recall': 0.5, 'f1-score': 0.49864645370871685, 'support': 1852.0}, 'weighted avg': {'precision': 0.9892300192658454, 'recall': 0.9946004319654428, 'f1-score': 0.9919079565134519, 'support': 1852.0}} | 0.0054 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9824063564131669, 'recall': 0.9605993340732519, 'f1-score': 0.9713804713804713, 'support': 1802.0}, 'PL': {'precision': 0.2111111111111111, 'recall': 0.38, 'f1-score': 0.2714285714285714, 'support': 50.0}, 'accuracy': 0.9449244060475162, 'macro avg': {'precision': 0.596758733762139, 'recall': 0.6702996670366259, 'f1-score': 0.6214045214045214, 'support': 1852.0}, 'weighted avg': {'precision': 0.9615830506544721, 'recall': 0.9449244060475162, 'f1-score': 0.952483281856932, 'support': 1852.0}} | 0.0551 | {'~TI': {'precision': 0.9789356984478935, 'recall': 0.9838440111420613, 'f1-score': 0.981383717699361, 'support': 1795.0}, 'TI': {'precision': 0.3958333333333333, 'recall': 0.3333333333333333, 'f1-score': 0.3619047619047619, 'support': 57.0}, 'accuracy': 0.9638228941684666, 'macro avg': {'precision': 0.6873845158906134, 'recall': 0.6585886722376973, 'f1-score': 0.6716442398020614, 'support': 1852.0}, 'weighted avg': {'precision': 0.9609892433660738, 'recall': 0.9638228941684666, 'f1-score': 0.9623176807229612, 'support': 1852.0}} | 0.0362 | 
{'~PC': {'precision': 0.9859611231101512, 'recall': 1.0, 'f1-score': 0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.5824525131620208, 'hamming': 0.023420626349892008} |
| No log | 4.0 | 392 | 0.0688 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.9622093023255814, 'recall': 0.990424895272292, 'f1-score': 0.9761132409318785, 'support': 1671.0}, 'CL': {'precision': 0.8787878787878788, 'recall': 0.6408839779005525, 'f1-score': 0.7412140575079872, 'support': 181.0}, 'accuracy': 0.9562634989200864, 'macro avg': {'precision': 0.9204985905567301, 'recall': 0.8156544365864222, 'f1-score': 0.8586636492199329, 'support': 1852.0}, 'weighted avg': {'precision': 0.9540563446256223, 'recall': 0.9562634989200864, 'f1-score': 0.9531560313207962, 'support': 1852.0}} | 0.0437 | {'~D': {'precision': 0.9946004319654428, 'recall': 1.0, 'f1-score': 0.9972929074174337, 'support': 1842.0}, 'D': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 10.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.4973002159827214, 'recall': 0.5, 'f1-score': 0.49864645370871685, 'support': 1852.0}, 'weighted avg': {'precision': 0.9892300192658454, 'recall': 0.9946004319654428, 'f1-score': 0.9919079565134519, 'support': 1852.0}} | 0.0054 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9730021598272138, 'recall': 1.0, 'f1-score': 0.9863163656267104, 'support': 1802.0}, 'PL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 50.0}, 'accuracy': 0.9730021598272138, 'macro avg': {'precision': 0.4865010799136069, 'recall': 0.5, 'f1-score': 0.4931581828133552, 'support': 1852.0}, 'weighted avg': {'precision': 0.9467332030284229, 'recall': 0.9730021598272138, 'f1-score': 0.9596879540277172, 'support': 1852.0}} | 0.0270 | {'~TI': {'precision': 0.9697460831982712, 'recall': 1.0, 'f1-score': 0.9846407021393307, 'support': 1795.0}, 'TI': {'precision': 1.0, 'recall': 0.017543859649122806, 'f1-score': 0.034482758620689655, 'support': 57.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9848730415991356, 'recall': 0.5087719298245614, 'f1-score': 0.5095617303800102, 'support': 1852.0}, 'weighted avg': {'precision': 0.970677224266143, 'recall': 0.9697624190064795, 'f1-score': 0.9553971801195885, 'support': 1852.0}} | 0.0302 | {'~PC': {'precision': 0.9859611231101512, 'recall': 1.0, 'f1-score': 
0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.5435075768863616, 'hamming': 0.01923596112311015} |
| No log | 5.0 | 490 | 0.0650 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.978494623655914, 'recall': 0.9802513464991023, 'f1-score': 0.979372197309417, 'support': 1671.0}, 'CL': {'precision': 0.8146067415730337, 'recall': 0.8011049723756906, 'f1-score': 0.807799442896936, 'support': 181.0}, 'accuracy': 0.962742980561555, 'macro avg': {'precision': 0.8965506826144738, 'recall': 0.8906781594373965, 'f1-score': 0.8935858201031766, 'support': 1852.0}, 'weighted avg': {'precision': 0.9624775034307512, 'recall': 0.962742980561555, 'f1-score': 0.9626040177475061, 'support': 1852.0}} | 0.0373 | {'~D': {'precision': 0.9956756756756757, 'recall': 1.0, 'f1-score': 0.9978331527627302, 'support': 1842.0}, 'D': {'precision': 1.0, 'recall': 0.2, 'f1-score': 0.3333333333333333, 'support': 10.0}, 'accuracy': 0.9956803455723542, 'macro avg': {'precision': 0.9978378378378379, 'recall': 0.6, 'f1-score': 0.6655832430480317, 'support': 1852.0}, 'weighted avg': {'precision': 0.9956990251590684, 'recall': 0.9956803455723542, 'f1-score': 0.9942451407787701, 'support': 1852.0}} | 0.0043 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9729875742841707, 'recall': 0.9994450610432852, 'f1-score': 0.9860388721598686, 'support': 1802.0}, 'PL': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 50.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.48649378714208535, 'recall': 0.4997225305216426, 'f1-score': 0.4930194360799343, 'support': 1852.0}, 'weighted avg': {'precision': 0.9467190112635397, 'recall': 0.9724622030237581, 'f1-score': 0.9594179522851422, 'support': 1852.0}} | 0.0275 | {'~TI': {'precision': 0.9774477447744775, 'recall': 0.9899721448467966, 'f1-score': 0.9836700802657071, 'support': 1795.0}, 'TI': {'precision': 0.47058823529411764, 'recall': 0.2807017543859649, 'f1-score': 0.3516483516483517, 'support': 57.0}, 'accuracy': 0.9681425485961123, 'macro avg': {'precision': 0.7240179900342976, 'recall': 0.6353369496163808, 'f1-score': 0.6676592159570294, 'support': 1852.0}, 'weighted avg': {'precision': 0.9618478570636888, 'recall': 0.9681425485961123, 'f1-score': 0.9642180076246762, 'support': 1852.0}} | 0.0319 | {'~PC': 
{'precision': 0.9859611231101512, 'recall': 1.0, 'f1-score': 0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.5884847892696312, 'hamming': 0.0185610151187905} |
| 0.0775 | 6.0 | 588 | 0.0699 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9794816414686826, 'recall': 1.0, 'f1-score': 0.9896344789961812, 'support': 1814.0}, 'SI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 38.0}, 'accuracy': 0.9794816414686826, 'macro avg': {'precision': 0.4897408207343413, 'recall': 0.5, 'f1-score': 0.4948172394980906, 'support': 1852.0}, 'weighted avg': {'precision': 0.9593842859741848, 'recall': 0.9794816414686826, 'f1-score': 0.9693288039411839, 'support': 1852.0}} | 0.0205 | {'~CL': {'precision': 0.9657375145180023, 'recall': 0.995212447636146, 'f1-score': 0.9802534630120837, 'support': 1671.0}, 'CL': {'precision': 0.9384615384615385, 'recall': 0.6740331491712708, 'f1-score': 0.7845659163987139, 'support': 181.0}, 'accuracy': 0.9638228941684666, 'macro avg': {'precision': 0.9520995264897704, 'recall': 0.8346227984037085, 'f1-score': 0.8824096897053988, 'support': 1852.0}, 'weighted avg': {'precision': 0.9630717738774948, 'recall': 0.9638228941684666, 'f1-score': 0.9611284922037575, 'support': 1852.0}} | 0.0362 | {'~D': {'precision': 0.9989106753812637, 'recall': 0.995656894679696, 'f1-score': 0.9972811310494835, 'support': 1842.0}, 'D': {'precision': 0.5, 'recall': 0.8, 'f1-score': 0.6153846153846154, 'support': 10.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.7494553376906319, 'recall': 0.897828447339848, 'f1-score': 0.8063328732170494, 'support': 1852.0}, 'weighted avg': {'precision': 0.9962167732463756, 'recall': 0.9946004319654428, 'f1-score': 0.9952190548309906, 'support': 1852.0}} | 0.0054 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9807903402854007, 'recall': 0.9916759156492786, 'f1-score': 0.9862030905077263, 'support': 1802.0}, 'PL': {'precision': 0.5, 'recall': 0.3, 'f1-score': 0.375, 'support': 50.0}, 'accuracy': 0.9730021598272138, 'macro avg': {'precision': 0.7403951701427003, 'recall': 0.6458379578246393, 'f1-score': 0.6806015452538632, 'support': 1852.0}, 'weighted avg': {'precision': 0.9678100395217559, 'recall': 0.9730021598272138, 'f1-score': 0.9697019271570857, 'support': 1852.0}} | 0.0270 | {'~TI': {'precision': 0.9808773903262092, 'recall': 0.9715877437325905, 'f1-score': 0.9762104673943465, 'support': 1795.0}, 'TI': {'precision': 0.3108108108108108, 'recall': 0.40350877192982454, 'f1-score': 0.3511450381679389, 'support': 57.0}, 'accuracy': 0.9541036717062635, 'macro avg': {'precision': 0.64584410056851, 'recall': 0.6875482578312075, 'f1-score': 0.6636777527811427, 'support': 1852.0}, 'weighted avg': {'precision': 0.9602543908486835, 'recall': 0.9541036717062635, 'f1-score': 0.9569724925207476, 'support': 
1852.0}} | 0.0459 | {'~PC': {'precision': 0.9859307359307359, 'recall': 0.9978094194961665, 'f1-score': 0.9918345127925966, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.49296536796536794, 'recall': 0.49890470974808326, 'f1-score': 0.4959172563962983, 'support': 1852.0}, 'weighted avg': {'precision': 0.9720893757070862, 'recall': 0.9838012958963283, 'f1-score': 0.9779102701723982, 'support': 1852.0}} | 0.0162 | {'f1-score': 0.6275630307447878, 'hamming': 0.020518358531317497} |
| 0.0775 | 7.0 | 686 | 0.0675 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9799891833423472, 'recall': 0.9988974641675854, 'f1-score': 0.9893529893529893, 'support': 1814.0}, 'SI': {'precision': 0.3333333333333333, 'recall': 0.02631578947368421, 'f1-score': 0.04878048780487805, 'support': 38.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.6566612583378403, 'recall': 0.5126066268206348, 'f1-score': 0.5190667385789337, 'support': 1852.0}, 'weighted avg': {'precision': 0.9667208667654884, 'recall': 0.9789416846652268, 'f1-score': 0.970053985541527, 'support': 1852.0}} | 0.0211 | {'~CL': {'precision': 0.9612940496822646, 'recall': 0.9958108916816277, 'f1-score': 0.9782480893592005, 'support': 1671.0}, 'CL': {'precision': 0.9421487603305785, 'recall': 0.6298342541436464, 'f1-score': 0.7549668874172185, 'support': 181.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 0.9517214050064216, 'recall': 0.812822572912637, 'f1-score': 0.8666074883882096, 'support': 1852.0}, 'weighted avg': {'precision': 0.9594229387899021, 'recall': 0.9600431965442765, 'f1-score': 0.9564263304221061, 'support': 1852.0}} | 0.0400 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.985579589572934, 'recall': 0.9861265260821309, 'f1-score': 0.9858529819694868, 'support': 1802.0}, 'PL': {'precision': 0.4897959183673469, 'recall': 0.48, 'f1-score': 0.48484848484848486, 'support': 50.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.7376877539701405, 'recall': 0.7330632630410654, 'f1-score': 0.7353507334089858, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721945012574484, 'recall': 0.9724622030237581, 'f1-score': 0.9723269426303669, 'support': 1852.0}} | 0.0275 | {'~TI': {'precision': 0.9692058346839546, 'recall': 0.9994428969359331, 'f1-score': 0.984092155787164, 'support': 1795.0}, 'TI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 57.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.4846029173419773, 'recall': 0.49972144846796657, 'f1-score': 0.492046077893582, 'support': 1852.0}, 'weighted avg': {'precision': 0.9393760654739193, 
'recall': 0.968682505399568, 'f1-score': 0.9538042222667168, 'support': 1852.0}} | 0.0313 | {'~PC': {'precision': 0.9859383450513791, 'recall': 0.9983570646221248, 'f1-score': 0.992108843537415, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.49296917252568956, 'recall': 0.4991785323110624, 'f1-score': 0.4960544217687075, 'support': 1852.0}, 'weighted avg': {'precision': 0.9720968780042215, 'recall': 0.984341252699784, 'f1-score': 0.9781807496216629, 'support': 1852.0}} | 0.0157 | {'f1-score': 0.621701996965612, 'hamming': 0.018898488120950324} |
| 0.0775 | 8.0 | 784 | 0.0839 | {'~LIN': {'precision': 0.9913466738777718, 'recall': 0.9989100817438692, 'f1-score': 0.995114006514658, 'support': 1835.0}, 'LIN': {'precision': 0.3333333333333333, 'recall': 0.058823529411764705, 'f1-score': 0.1, 'support': 17.0}, 'accuracy': 0.990280777537797, 'macro avg': {'precision': 0.6623400036055526, 'recall': 0.528866805577817, 'f1-score': 0.547557003257329, 'support': 1852.0}, 'weighted avg': {'precision': 0.9853065946179146, 'recall': 0.990280777537797, 'f1-score': 0.9868975172539942, 'support': 1852.0}} | 0.0097 | {'~SI': {'precision': 0.9809056192034915, 'recall': 0.9911797133406836, 'f1-score': 0.9860159034823143, 'support': 1814.0}, 'SI': {'precision': 0.15789473684210525, 'recall': 0.07894736842105263, 'f1-score': 0.10526315789473684, 'support': 38.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.5694001780227984, 'recall': 0.535063540880868, 'f1-score': 0.5456395306885256, 'support': 1852.0}, 'weighted avg': {'precision': 0.9640187868440245, 'recall': 0.9724622030237581, 'f1-score': 0.9679443028709062, 'support': 1852.0}} | 0.0275 | {'~CL': {'precision': 0.9635205558772437, 'recall': 0.9958108916816277, 'f1-score': 0.9793996468510888, 'support': 1671.0}, 'CL': {'precision': 0.944, 'recall': 0.6519337016574586, 'f1-score': 0.7712418300653595, 'support': 181.0}, 'accuracy': 0.9622030237580994, 'macro avg': {'precision': 0.9537602779386218, 'recall': 0.8238722966695431, 'f1-score': 0.8753207384582242, 'support': 1852.0}, 'weighted avg': {'precision': 0.9616127693687225, 'recall': 0.9622030237580994, 'f1-score': 0.9590559293358529, 'support': 1852.0}} | 0.0378 | {'~D': {'precision': 0.9989154013015185, 'recall': 1.0, 'f1-score': 0.9994574064026045, 'support': 1842.0}, 'D': {'precision': 1.0, 'recall': 0.8, 'f1-score': 0.8888888888888888, 'support': 10.0}, 'accuracy': 0.9989200863930886, 'macro avg': {'precision': 0.9994577006507592, 'recall': 0.9, 'f1-score': 0.9441731476457467, 'support': 1852.0}, 'weighted avg': {'precision': 0.9989212576659812, 'recall': 0.9989200863930886, 'f1-score': 0.998860384169809, 'support': 1852.0}} | 0.0011 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9772234273318872, 'recall': 1.0, 'f1-score': 0.988480526604498, 'support': 1802.0}, 'PL': {'precision': 1.0, 'recall': 0.16, 'f1-score': 0.27586206896551724, 'support': 50.0}, 'accuracy': 0.9773218142548596, 'macro avg': {'precision': 0.9886117136659436, 'recall': 0.58, 'f1-score': 0.6321712977850076, 'support': 1852.0}, 'weighted avg': {'precision': 0.9778383456004647, 'recall': 0.9773218142548596, 'f1-score': 0.9692413673809833, 'support': 1852.0}} | 0.0227 | {'~TI': {'precision': 0.978429203539823, 'recall': 0.9855153203342618, 'f1-score': 0.9819594782126007, 'support': 1795.0}, 'TI': {'precision': 0.4090909090909091, 'recall': 0.3157894736842105, 'f1-score': 0.3564356435643564, 'support': 57.0}, 'accuracy': 0.964902807775378, 'macro avg': {'precision': 0.6937600563153661, 'recall': 0.6506523970092362, 'f1-score': 0.6691975608884786, 'support': 1852.0}, 'weighted avg': {'precision': 0.9609063726631555, 
'recall': 0.964902807775378, 'f1-score': 0.962707394748805, 'support': 1852.0}} | 0.0351 | {'~PC': {'precision': 0.9859611231101512, 'recall': 1.0, 'f1-score': 0.9929309407286568, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.4929805615550756, 'recall': 0.5, 'f1-score': 0.4964654703643284, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721193362846307, 'recall': 0.9859611231101512, 'f1-score': 0.9789913054916455, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.6511972544420139, 'hamming': 0.01896598272138229} |
| 0.0775 | 9.0 | 882 | 0.0748 | {'~LIN': {'precision': 0.990815775256618, 'recall': 0.9994550408719346, 'f1-score': 0.99511665762344, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.990280777537797, 'macro avg': {'precision': 0.495407887628309, 'recall': 0.4997275204359673, 'f1-score': 0.49755832881172, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817208140366599, 'recall': 0.990280777537797, 'f1-score': 0.9859822174616698, 'support': 1852.0}} | 0.0097 | {'~SI': {'precision': 0.9830878341516639, 'recall': 0.9933847850055126, 'f1-score': 0.9882094872497944, 'support': 1814.0}, 'SI': {'precision': 0.3684210526315789, 'recall': 0.18421052631578946, 'f1-score': 0.24561403508771928, 'support': 38.0}, 'accuracy': 0.9767818574514039, 'macro avg': {'precision': 0.6757544433916214, 'recall': 0.588797655660651, 'f1-score': 0.6169117611687569, 'support': 1852.0}, 'weighted avg': {'precision': 0.9704758807511438, 'recall': 0.9767818574514039, 'f1-score': 0.9729726475186071, 'support': 1852.0}} | 0.0232 | {'~CL': {'precision': 0.9809069212410502, 'recall': 0.9838420107719928, 'f1-score': 0.9823722736779206, 'support': 1671.0}, 'CL': {'precision': 0.8465909090909091, 'recall': 0.8232044198895028, 'f1-score': 0.834733893557423, 'support': 181.0}, 'accuracy': 0.9681425485961123, 'macro avg': {'precision': 0.9137489151659797, 'recall': 0.9035232153307478, 'f1-score': 0.9085530836176718, 'support': 1852.0}, 'weighted avg': {'precision': 0.9677799243732449, 'recall': 0.9681425485961123, 'f1-score': 0.9679432527266192, 'support': 1852.0}} | 0.0319 | {'~D': {'precision': 0.9983739837398374, 'recall': 1.0, 'f1-score': 0.9991863303498779, 'support': 1842.0}, 'D': {'precision': 1.0, 'recall': 0.7, 'f1-score': 0.8235294117647058, 'support': 10.0}, 'accuracy': 0.9983801295896328, 'macro avg': {'precision': 0.9991869918699188, 'recall': 0.85, 'f1-score': 0.9113578710572918, 'support': 1852.0}, 'weighted avg': {'precision': 0.9983827635252595, 'recall': 0.9983801295896328, 'f1-score': 0.9982378588672367, 'support': 1852.0}} | 0.0016 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9878587196467992, 'recall': 0.9933407325194229, 'f1-score': 0.9905921416712784, 'support': 1802.0}, 'PL': {'precision': 0.7, 'recall': 0.56, 'f1-score': 0.6222222222222222, 'support': 50.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.8439293598233996, 'recall': 0.7766703662597114, 'f1-score': 0.8064071819467503, 'support': 1852.0}, 'weighted avg': {'precision': 0.980087155941432, 'recall': 0.9816414686825053, 'f1-score': 0.9806469494615306, 'support': 1852.0}} | 0.0184 | {'~TI': {'precision': 0.971211298207496, 'recall': 0.996100278551532, 'f1-score': 0.9834983498349835, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.07017543859649122, 'f1-score': 0.11764705882352941, 'support': 57.0}, 'accuracy': 0.9676025917926566, 'macro avg': {'precision': 0.6674238309219298, 'recall': 0.5331378585740116, 'f1-score': 0.5505727043292564, 'support': 1852.0}, 'weighted avg': {'precision': 
0.9525116376942375, 'recall': 0.9676025917926566, 'f1-score': 0.9568495789993178, 'support': 1852.0}} | 0.0324 | {'~PC': {'precision': 0.9859535386277688, 'recall': 0.9994523548740416, 'f1-score': 0.9926570573837368, 'support': 1826.0}, 'PC': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 26.0}, 'accuracy': 0.9854211663066955, 'macro avg': {'precision': 0.4929767693138844, 'recall': 0.4997261774370208, 'f1-score': 0.4963285286918684, 'support': 1852.0}, 'weighted avg': {'precision': 0.9721118582798628, 'recall': 0.9854211663066955, 'f1-score': 0.9787212671612869, 'support': 1852.0}} | 0.0146 | {'f1-score': 0.6608428432589734, 'hamming': 0.016941144708423325} |
| 0.0775 | 10.0 | 980 | 0.0962 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.983032293377121, 'recall': 0.9900771775082691, 'f1-score': 0.9865421587475968, 'support': 1814.0}, 'SI': {'precision': 0.28, 'recall': 0.18421052631578946, 'f1-score': 0.2222222222222222, 'support': 38.0}, 'accuracy': 0.9735421166306696, 'macro avg': {'precision': 0.6315161466885605, 'recall': 0.5871438519120292, 'f1-score': 0.6043821904849095, 'support': 1852.0}, 'weighted avg': {'precision': 0.9686072247225148, 'recall': 0.9735421166306696, 'f1-score': 0.9708595682573352, 'support': 1852.0}} | 0.0265 | {'~CL': {'precision': 0.9724663151728178, 'recall': 0.9934171154997008, 'f1-score': 0.9828300769686205, 'support': 1671.0}, 'CL': {'precision': 0.9241379310344827, 'recall': 0.7403314917127072, 'f1-score': 0.8220858895705522, 'support': 181.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.9483021231036503, 'recall': 0.866874303606204, 'f1-score': 0.9024579832695863, 'support': 1852.0}, 'weighted avg': {'precision': 0.9677430767662095, 'recall': 0.968682505399568, 'f1-score': 0.9671201968827401, 'support': 1852.0}} | 0.0313 | {'~D': {'precision': 0.9989154013015185, 'recall': 1.0, 'f1-score': 0.9994574064026045, 'support': 1842.0}, 'D': {'precision': 1.0, 'recall': 0.8, 'f1-score': 0.8888888888888888, 'support': 10.0}, 'accuracy': 0.9989200863930886, 'macro avg': {'precision': 0.9994577006507592, 'recall': 0.9, 'f1-score': 0.9441731476457467, 'support': 1852.0}, 'weighted avg': {'precision': 0.9989212576659812, 'recall': 0.9989200863930886, 'f1-score': 0.998860384169809, 'support': 1852.0}} | 0.0011 | {'~HI': {'precision': 0.9967567567567568, 'recall': 0.9994579945799458, 'f1-score': 0.9981055480378891, 'support': 1845.0}, 'HI': {'precision': 0.5, 'recall': 0.14285714285714285, 'f1-score': 0.2222222222222222, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.7483783783783784, 'recall': 0.5711575687185443, 'f1-score': 0.6101638851300557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9948791664234429, 'recall': 0.9962203023758099, 'f1-score': 0.9951729436746549, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9820065430752454, 'recall': 0.9994450610432852, 'f1-score': 0.9906490649064906, 'support': 1802.0}, 'PL': {'precision': 0.9444444444444444, 'recall': 0.34, 'f1-score': 0.5, 'support': 50.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.9632254937598449, 'recall': 0.6697225305216427, 'f1-score': 0.7453245324532454, 'support': 1852.0}, 'weighted avg': {'precision': 0.9809924475398566, 'recall': 0.9816414686825053, 'f1-score': 0.9774025998712182, 'support': 1852.0}} | 0.0184 | {'~TI': {'precision': 0.9747530186608123, 'recall': 0.9894150417827298, 'f1-score': 0.9820293060547415, 'support': 1795.0}, 'TI': {'precision': 0.36666666666666664, 'recall': 0.19298245614035087, 'f1-score': 0.25287356321839083, 'support': 57.0}, 'accuracy': 0.964902807775378, 'macro avg': {'precision': 0.6707098426637395, 'recall': 0.5911987489615403, 'f1-score': 
0.6174514346365662, 'support': 1852.0}, 'weighted avg': {'precision': 0.9560376179784872, 'recall': 0.964902807775378, 'f1-score': 0.9595876876197135, 'support': 1852.0}} | 0.0351 | {'~PC': {'precision': 0.9864864864864865, 'recall': 0.9994523548740416, 'f1-score': 0.9929270946681176, 'support': 1826.0}, 'PC': {'precision': 0.5, 'recall': 0.038461538461538464, 'f1-score': 0.07142857142857142, 'support': 26.0}, 'accuracy': 0.9859611231101512, 'macro avg': {'precision': 0.7432432432432432, 'recall': 0.51895694666779, 'f1-score': 0.5321778330483445, 'support': 1852.0}, 'weighted avg': {'precision': 0.9796567625941276, 'recall': 0.9859611231101512, 'f1-score': 0.97999029034618, 'support': 1852.0}} | 0.0140 | {'f1-score': 0.6886895234793033, 'hamming': 0.01734611231101512} |
| 0.0358 | 11.0 | 1078 | 0.0811 | {'~LIN': {'precision': 0.9913279132791328, 'recall': 0.9967302452316076, 'f1-score': 0.9940217391304348, 'support': 1835.0}, 'LIN': {'precision': 0.14285714285714285, 'recall': 0.058823529411764705, 'f1-score': 0.08333333333333333, 'support': 17.0}, 'accuracy': 0.9881209503239741, 'macro avg': {'precision': 0.5670925280681378, 'recall': 0.5277768873216862, 'f1-score': 0.5386775362318841, 'support': 1852.0}, 'weighted avg': {'precision': 0.9835395746737473, 'recall': 0.9881209503239741, 'f1-score': 0.9856622883212821, 'support': 1852.0}} | 0.0119 | {'~SI': {'precision': 0.9834801762114538, 'recall': 0.9845644983461963, 'f1-score': 0.9840220385674932, 'support': 1814.0}, 'SI': {'precision': 0.2222222222222222, 'recall': 0.21052631578947367, 'f1-score': 0.21621621621621623, 'support': 38.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.6028511992168379, 'recall': 0.597545407067835, 'f1-score': 0.6001191273918547, 'support': 1852.0}, 'weighted avg': {'precision': 0.9678604125766855, 'recall': 0.968682505399568, 'f1-score': 0.9682679234220566, 'support': 1852.0}} | 0.0313 | {'~CL': {'precision': 0.9815256257449344, 'recall': 0.9856373429084381, 'f1-score': 0.9835771872200657, 'support': 1671.0}, 'CL': {'precision': 0.8620689655172413, 'recall': 0.8287292817679558, 'f1-score': 0.8450704225352113, 'support': 181.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.9217972956310878, 'recall': 0.9071833123381969, 'f1-score': 0.9143238048776385, 'support': 1852.0}, 'weighted avg': {'precision': 0.9698508657550788, 'recall': 0.9703023758099352, 'f1-score': 0.9700406189652285, 'support': 1852.0}} | 0.0297 | {'~D': {'precision': 0.9989136338946225, 'recall': 0.998371335504886, 'f1-score': 0.9986424110779256, 'support': 1842.0}, 'D': {'precision': 0.7272727272727273, 'recall': 0.8, 'f1-score': 0.7619047619047619, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8630931805836749, 'recall': 0.899185667752443, 'f1-score': 0.8802735864913438, 'support': 1852.0}, 'weighted avg': {'precision': 0.9974468903383488, 'recall': 0.9973002159827213, 'f1-score': 0.997364130034874, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.996742671009772, 'recall': 0.9951219512195122, 'f1-score': 0.9959316517493898, 'support': 1845.0}, 'HI': {'precision': 0.1, 'recall': 0.14285714285714285, 'f1-score': 0.11764705882352941, 'support': 7.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.548371335504886, 'recall': 0.5689895470383275, 'f1-score': 0.5567893552864596, 'support': 1852.0}, 'weighted avg': {'precision': 0.9933532548666465, 'recall': 0.9919006479481641, 'f1-score': 0.9926120015601452, 'support': 1852.0}} | 0.0081 | {'~PL': {'precision': 0.9878048780487805, 'recall': 0.9889012208657048, 'f1-score': 0.9883527454242929, 'support': 1802.0}, 'PL': {'precision': 0.5833333333333334, 'recall': 0.56, 'f1-score': 0.5714285714285714, 'support': 50.0}, 'accuracy': 0.9773218142548596, 'macro avg': {'precision': 0.785569105691057, 'recall': 0.7744506104328523, 'f1-score': 0.7798906584264321, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768850199301129, 'recall': 0.9773218142548596, 'f1-score': 0.9770966932105856, 'support': 1852.0}} | 0.0227 | {'~TI': {'precision': 0.9820022497187851, 'recall': 0.9727019498607242, 'f1-score': 0.9773299748110831, 'support': 1795.0}, 'TI': {'precision': 0.33783783783783783, 'recall': 0.43859649122807015, 'f1-score': 0.3816793893129771, 'support': 57.0}, 'accuracy': 0.9562634989200864, 
'macro avg': {'precision': 0.6599200437783115, 'recall': 0.7056492205443972, 'f1-score': 0.6795046820620301, 'support': 1852.0}, 'weighted avg': {'precision': 0.9621764551846523, 'recall': 0.9562634989200864, 'f1-score': 0.958997316402124, 'support': 1852.0}} | 0.0437 | {'~PC': {'precision': 0.9878520154610713, 'recall': 0.97973713033954, 'f1-score': 0.9837778388781964, 'support': 1826.0}, 'PC': {'precision': 0.0975609756097561, 'recall': 0.15384615384615385, 'f1-score': 0.11940298507462686, 'support': 26.0}, 'accuracy': 0.9681425485961123, 'macro avg': {'precision': 0.5427064955354137, 'recall': 0.5667916420928469, 'f1-score': 0.5515904119764116, 'support': 1852.0}, 'weighted avg': {'precision': 0.9753533291564632, 'recall': 0.9681425485961123, 'f1-score': 0.9716429867189671, 'support': 1852.0}} | 0.0319 | {'f1-score': 0.6876461453430068, 'hamming': 0.022745680345572353} |
| 0.0358 | 12.0 | 1176 | 0.0952 | {'~LIN': {'precision': 0.9917943107221007, 'recall': 0.9880108991825614, 'f1-score': 0.98989898989899, 'support': 1835.0}, 'LIN': {'precision': 0.08333333333333333, 'recall': 0.11764705882352941, 'f1-score': 0.0975609756097561, 'support': 17.0}, 'accuracy': 0.9800215982721382, 'macro avg': {'precision': 0.537563822027717, 'recall': 0.5528289790030454, 'f1-score': 0.543729982754373, 'support': 1852.0}, 'weighted avg': {'precision': 0.9834553060700439, 'recall': 0.9800215982721382, 'f1-score': 0.9817079822084301, 'support': 1852.0}} | 0.0200 | {'~SI': {'precision': 0.9860646599777034, 'recall': 0.9751929437706726, 'f1-score': 0.9805986696230599, 'support': 1814.0}, 'SI': {'precision': 0.22413793103448276, 'recall': 0.34210526315789475, 'f1-score': 0.2708333333333333, 'support': 38.0}, 'accuracy': 0.9622030237580994, 'macro avg': {'precision': 0.605101295506093, 'recall': 0.6586491034642836, 'f1-score': 0.6257160014781966, 'support': 1852.0}, 'weighted avg': {'precision': 0.9704311741786524, 'recall': 0.9622030237580994, 'f1-score': 0.9660354499799662, 'support': 1852.0}} | 0.0378 | {'~CL': {'precision': 0.9646786334684424, 'recall': 0.9970077797725913, 'f1-score': 0.9805768098881695, 'support': 1671.0}, 'CL': {'precision': 0.96, 'recall': 0.6629834254143646, 'f1-score': 0.7843137254901961, 'support': 181.0}, 'accuracy': 0.9643628509719222, 'macro avg': {'precision': 0.9623393167342211, 'recall': 0.8299956025934779, 'f1-score': 0.8824452676891827, 'support': 1852.0}, 'weighted avg': {'precision': 0.9642213804134813, 'recall': 0.9643628509719222, 'f1-score': 0.9613955905166613, 'support': 1852.0}} | 0.0356 | {'~D': {'precision': 0.9972914409534128, 'recall': 0.999457111834962, 'f1-score': 0.9983731019522777, 'support': 1842.0}, 'D': {'precision': 0.8333333333333334, 'recall': 0.5, 'f1-score': 0.625, 'support': 10.0}, 'accuracy': 0.9967602591792657, 'macro avg': {'precision': 0.915312387143373, 'recall': 0.749728555917481, 'f1-score': 0.8116865509761388, 'support': 1852.0}, 'weighted avg': {'precision': 0.9964061379965009, 'recall': 0.9967602591792657, 'f1-score': 0.9963570484860127, 'support': 1852.0}} | 0.0032 | {'~HI': {'precision': 0.9961894393032118, 'recall': 0.991869918699187, 'f1-score': 0.9940249864204237, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9881209503239741, 'macro avg': {'precision': 0.4980947196516059, 'recall': 0.4959349593495935, 'f1-score': 0.49701249321021185, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924241444462342, 'recall': 0.9881209503239741, 'f1-score': 0.9902678725408648, 'support': 1852.0}} | 0.0119 | {'~PL': {'precision': 0.9846069268829027, 'recall': 0.9938956714761377, 'f1-score': 0.9892294946147473, 'support': 1802.0}, 'PL': {'precision': 0.6666666666666666, 'recall': 0.44, 'f1-score': 0.5301204819277109, 'support': 50.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.8256367967747846, 'recall': 0.7169478357380689, 'f1-score': 0.759674988271229, 'support': 1852.0}, 'weighted avg': {'precision': 0.9760232265530907, 'recall': 0.9789416846652268, 'f1-score': 0.976834542868337, 'support': 1852.0}} | 0.0211 | {'~TI': {'precision': 0.9811215991116047, 'recall': 0.9844011142061281, 'f1-score': 0.9827586206896551, 'support': 1795.0}, 'TI': {'precision': 0.45098039215686275, 'recall': 0.40350877192982454, 'f1-score': 0.42592592592592593, 'support': 57.0}, 'accuracy': 0.9665226781857451, 'macro avg': {'precision': 0.7160509956342337, 'recall': 
0.6939549430679763, 'f1-score': 0.7043422733077905, 'support': 1852.0}, 'weighted avg': {'precision': 0.9648051580768205, 'recall': 0.9665226781857451, 'f1-score': 0.9656206813799724, 'support': 1852.0}} | 0.0335 | {'~PC': {'precision': 0.9864645370871684, 'recall': 0.9978094194961665, 'f1-score': 0.9921045466920774, 'support': 1826.0}, 'PC': {'precision': 0.2, 'recall': 0.038461538461538464, 'f1-score': 0.06451612903225806, 'support': 26.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.5932322685435842, 'recall': 0.5181354789788525, 'f1-score': 0.5283103378621677, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754234582727697, 'recall': 0.984341252699784, 'f1-score': 0.9790822470921016, 'support': 1852.0}} | 0.0157 | {'f1-score': 0.6691147369436613, 'hamming': 0.02234071274298056} |
| 0.0358 | 13.0 | 1274 | 0.0885 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.983633387888707, 'recall': 0.9939360529217199, 'f1-score': 0.9887578831916644, 'support': 1814.0}, 'SI': {'precision': 0.42105263157894735, 'recall': 0.21052631578947367, 'f1-score': 0.2807017543859649, 'support': 38.0}, 'accuracy': 0.9778617710583153, 'macro avg': {'precision': 0.7023430097338272, 'recall': 0.6022311843555967, 'f1-score': 0.6347298187888146, 'support': 1852.0}, 'weighted avg': {'precision': 0.9720901542279237, 'recall': 0.9778617710583153, 'f1-score': 0.9742297336805323, 'support': 1852.0}} | 0.0221 | {'~CL': {'precision': 0.9713114754098361, 'recall': 0.992818671454219, 'f1-score': 0.9819473216928085, 'support': 1671.0}, 'CL': {'precision': 0.9166666666666666, 'recall': 0.7292817679558011, 'f1-score': 0.8123076923076923, 'support': 181.0}, 'accuracy': 0.9670626349892009, 'macro avg': {'precision': 0.9439890710382514, 'recall': 0.8610502197050101, 'f1-score': 0.8971275070002505, 'support': 1852.0}, 'weighted avg': {'precision': 0.9659709190477876, 'recall': 0.9670626349892009, 'f1-score': 0.9653680706567902, 'support': 1852.0}} | 0.0329 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962203023758099, 'recall': 1.0, 'f1-score': 0.9981065728969435, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9962203023758099, 'macro avg': {'precision': 0.49811015118790497, 'recall': 0.5, 'f1-score': 0.49905328644847174, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924548908657502, 'recall': 0.9962203023758099, 'f1-score': 0.9943340318546764, 'support': 1852.0}} | 0.0038 | {'~PL': {'precision': 0.9873278236914601, 'recall': 0.9944506104328524, 'f1-score': 0.9908764169200995, 'support': 1802.0}, 'PL': {'precision': 0.7297297297297297, 'recall': 0.54, 'f1-score': 0.6206896551724138, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.8585287767105949, 'recall': 0.7672253052164262, 'f1-score': 0.8057830360462567, 'support': 1852.0}, 'weighted avg': {'precision': 0.980373231521867, 'recall': 0.9821814254859611, 'f1-score': 0.9808821738923543, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.977728285077951, 'recall': 0.9782729805013928, 'f1-score': 0.9780005569479253, 'support': 1795.0}, 'TI': {'precision': 0.30357142857142855, 'recall': 0.2982456140350877, 'f1-score': 0.3008849557522124, 'support': 57.0}, 'accuracy': 0.9573434125269978, 'macro avg': {'precision': 0.6406498568246898, 'recall': 0.6382592972682403, 'f1-score': 
0.6394427563500689, 'support': 1852.0}, 'weighted avg': {'precision': 0.9569793969457308, 'recall': 0.9573434125269978, 'f1-score': 0.9571606059392019, 'support': 1852.0}} | 0.0427 | {'~PC': {'precision': 0.9864645370871684, 'recall': 0.9978094194961665, 'f1-score': 0.9921045466920774, 'support': 1826.0}, 'PC': {'precision': 0.2, 'recall': 0.038461538461538464, 'f1-score': 0.06451612903225806, 'support': 26.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.5932322685435842, 'recall': 0.5181354789788525, 'f1-score': 0.5283103378621677, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754234582727697, 'recall': 0.984341252699784, 'f1-score': 0.9790822470921016, 'support': 1852.0}} | 0.0157 | {'f1-score': 0.6807722773691871, 'hamming': 0.01822354211663067} |
| 0.0358 | 14.0 | 1372 | 0.1276 | {'~LIN': {'precision': 0.9908207343412527, 'recall': 1.0, 'f1-score': 0.9953892053159751, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.49541036717062636, 'recall': 0.5, 'f1-score': 0.49769460265798754, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817257276005392, 'recall': 0.9908207343412527, 'f1-score': 0.9862522633665304, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.980498374864572, 'recall': 0.9977949283351709, 'f1-score': 0.9890710382513661, 'support': 1814.0}, 'SI': {'precision': 0.3333333333333333, 'recall': 0.05263157894736842, 'f1-score': 0.09090909090909091, 'support': 38.0}, 'accuracy': 0.978401727861771, 'macro avg': {'precision': 0.6569158540989527, 'recall': 0.5252132536412697, 'f1-score': 0.5399900645802285, 'support': 1852.0}, 'weighted avg': {'precision': 0.9672196105134991, 'recall': 0.978401727861771, 'f1-score': 0.970642229396611, 'support': 1852.0}} | 0.0216 | {'~CL': {'precision': 0.9651972157772621, 'recall': 0.9958108916816277, 'f1-score': 0.9802650957290132, 'support': 1671.0}, 'CL': {'precision': 0.9453125, 'recall': 0.6685082872928176, 'f1-score': 0.7831715210355987, 'support': 181.0}, 'accuracy': 0.9638228941684666, 'macro avg': {'precision': 0.9552548578886311, 'recall': 0.8321595894872227, 'f1-score': 0.8817183083823059, 'support': 1852.0}, 'weighted avg': {'precision': 0.9632538391273245, 'recall': 0.9638228941684666, 'f1-score': 0.9610027107292789, 'support': 1852.0}} | 0.0362 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.983013698630137, 'recall': 0.9955604883462819, 'f1-score': 0.989247311827957, 'support': 1802.0}, 'PL': {'precision': 0.7037037037037037, 'recall': 0.38, 'f1-score': 0.4935064935064935, 'support': 50.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.8433587011669204, 'recall': 0.6877802441731409, 'f1-score': 0.7413769026672252, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754729320284515, 'recall': 0.9789416846652268, 'f1-score': 0.975863380447788, 'support': 1852.0}} | 0.0211 | {'~TI': {'precision': 0.9758904109589042, 'recall': 0.9922005571030641, 'f1-score': 0.9839779005524862, 'support': 1795.0}, 'TI': {'precision': 0.48148148148148145, 'recall': 0.22807017543859648, 'f1-score': 0.30952380952380953, 'support': 57.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.7286859462201928, 'recall': 0.6101353662708303, 
'f1-score': 0.6467508550381479, 'support': 1852.0}, 'weighted avg': {'precision': 0.9606737214447502, 'recall': 0.968682505399568, 'f1-score': 0.9632198642735259, 'support': 1852.0}} | 0.0313 | {'~PC': {'precision': 0.9869636067354699, 'recall': 0.9950711938663745, 'f1-score': 0.9910008181074448, 'support': 1826.0}, 'PC': {'precision': 0.18181818181818182, 'recall': 0.07692307692307693, 'f1-score': 0.10810810810810811, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.5843908942768259, 'recall': 0.5359971353947257, 'f1-score': 0.5495544631077764, 'support': 1852.0}, 'weighted avg': {'precision': 0.9756602692366311, 'recall': 0.9821814254859611, 'f1-score': 0.9786059960448191, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6644042558616967, 'hamming': 0.018223542116630666} |
| 0.0358 | 15.0 | 1470 | 0.1154 | {'~LIN': {'precision': 0.9918831168831169, 'recall': 0.9989100817438692, 'f1-score': 0.995384197664947, 'support': 1835.0}, 'LIN': {'precision': 0.5, 'recall': 0.11764705882352941, 'f1-score': 0.19047619047619047, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.7459415584415585, 'recall': 0.5582785702836993, 'f1-score': 0.5929301940705687, 'support': 1852.0}, 'weighted avg': {'precision': 0.9873679910801942, 'recall': 0.9908207343412527, 'f1-score': 0.9879957332361085, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9856194690265486, 'recall': 0.9823594266813671, 'f1-score': 0.9839867476532302, 'support': 1814.0}, 'SI': {'precision': 0.2727272727272727, 'recall': 0.3157894736842105, 'f1-score': 0.2926829268292683, 'support': 38.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.6291733708769107, 'recall': 0.6490744501827888, 'f1-score': 0.6383348372412493, 'support': 1852.0}, 'weighted avg': {'precision': 0.9709920913487017, 'recall': 0.968682505399568, 'f1-score': 0.9698023280034944, 'support': 1852.0}} | 0.0313 | {'~CL': {'precision': 0.9792038027332145, 'recall': 0.9862357869539198, 'f1-score': 0.9827072152653548, 'support': 1671.0}, 'CL': {'precision': 0.863905325443787, 'recall': 0.8066298342541437, 'f1-score': 0.8342857142857143, 'support': 181.0}, 'accuracy': 0.968682505399568, 'macro avg': {'precision': 0.9215545640885008, 'recall': 0.8964328106040318, 'f1-score': 0.9084964647755345, 'support': 1852.0}, 'weighted avg': {'precision': 0.967935431032682, 'recall': 0.968682505399568, 'f1-score': 0.9682016582041696, 'support': 1852.0}} | 0.0313 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9846069268829027, 'recall': 0.9938956714761377, 'f1-score': 0.9892294946147473, 'support': 1802.0}, 'PL': {'precision': 0.6666666666666666, 'recall': 0.44, 'f1-score': 0.5301204819277109, 'support': 50.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.8256367967747846, 'recall': 0.7169478357380689, 'f1-score': 0.759674988271229, 'support': 1852.0}, 'weighted avg': {'precision': 0.9760232265530907, 'recall': 0.9789416846652268, 'f1-score': 0.976834542868337, 'support': 1852.0}} | 0.0211 | {'~TI': {'precision': 0.977997799779978, 'recall': 0.9905292479108635, 'f1-score': 0.9842236368668696, 'support': 1795.0}, 'TI': {'precision': 0.5, 'recall': 0.2982456140350877, 'f1-score': 0.37362637362637363, 'support': 57.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 
0.738998899889989, 'recall': 0.6443874309729756, 'f1-score': 0.6789250052466216, 'support': 1852.0}, 'weighted avg': {'precision': 0.9632862044303783, 'recall': 0.9692224622030238, 'f1-score': 0.965430956518755, 'support': 1852.0}} | 0.0308 | {'~PC': {'precision': 0.9869706840390879, 'recall': 0.9956188389923329, 'f1-score': 0.9912758996728462, 'support': 1826.0}, 'PC': {'precision': 0.2, 'recall': 0.07692307692307693, 'f1-score': 0.1111111111111111, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.593485342019544, 'recall': 0.536270957957705, 'f1-score': 0.5511935053919786, 'support': 1852.0}, 'weighted avg': {'precision': 0.975922499489943, 'recall': 0.9827213822894169, 'f1-score': 0.9789193745634482, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.6974095844234051, 'hamming': 0.01862850971922246} |
| 0.0124 | 16.0 | 1568 | 0.1324 | {'~LIN': {'precision': 0.9908108108108108, 'recall': 0.9989100817438692, 'f1-score': 0.9948439620081411, 'support': 1835.0}, 'LIN': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 17.0}, 'accuracy': 0.9897408207343412, 'macro avg': {'precision': 0.4954054054054054, 'recall': 0.4994550408719346, 'f1-score': 0.49742198100407053, 'support': 1852.0}, 'weighted avg': {'precision': 0.9817158951608196, 'recall': 0.9897408207343412, 'f1-score': 0.9857120249918677, 'support': 1852.0}} | 0.0103 | {'~SI': {'precision': 0.9840571742715778, 'recall': 0.9867695700110254, 'f1-score': 0.9854115056427195, 'support': 1814.0}, 'SI': {'precision': 0.2727272727272727, 'recall': 0.23684210526315788, 'f1-score': 0.2535211267605634, 'support': 38.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.6283922234994253, 'recall': 0.6118058376370916, 'f1-score': 0.6194663162016414, 'support': 1852.0}, 'weighted avg': {'precision': 0.9694618523176449, 'recall': 0.9713822894168467, 'f1-score': 0.9703943164431936, 'support': 1852.0}} | 0.0286 | {'~CL': {'precision': 0.9668604651162791, 'recall': 0.995212447636146, 'f1-score': 0.9808316130934828, 'support': 1671.0}, 'CL': {'precision': 0.9393939393939394, 'recall': 0.6850828729281768, 'f1-score': 0.792332268370607, 'support': 181.0}, 'accuracy': 0.964902807775378, 'macro avg': {'precision': 0.9531272022551093, 'recall': 0.8401476602821614, 'f1-score': 0.8865819407320449, 'support': 1852.0}, 'weighted avg': {'precision': 0.9641761016412556, 'recall': 0.964902807775378, 'f1-score': 0.9624091609364415, 'support': 1852.0}} | 0.0351 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9972870320130223, 'recall': 0.9962059620596206, 'f1-score': 0.9967462039045553, 'support': 1845.0}, 'HI': {'precision': 0.2222222222222222, 'recall': 0.2857142857142857, 'f1-score': 0.25, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.6097546271176222, 'recall': 0.6409601238869531, 'f1-score': 0.6233731019522777, 'support': 1852.0}, 'weighted avg': {'precision': 0.9943575213928627, 'recall': 0.9935205183585313, 'f1-score': 0.9939237290517843, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.987885462555066, 'recall': 0.9955604883462819, 'f1-score': 0.9917081260364843, 'support': 1802.0}, 'PL': {'precision': 0.7777777777777778, 'recall': 0.56, 'f1-score': 0.6511627906976745, 'support': 50.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.8828316201664219, 'recall': 0.777780244173141, 'f1-score': 0.8214354583670793, 'support': 1852.0}, 'weighted avg': {'precision': 0.9822130088623746, 'recall': 0.9838012958963283, 'f1-score': 0.9825141375014194, 'support': 1852.0}} | 0.0162 | {'~TI': {'precision': 0.9784172661870504, 'recall': 0.984958217270195, 'f1-score': 0.9816768461965575, 'support': 1795.0}, 'TI': {'precision': 0.4, 'recall': 0.3157894736842105, 'f1-score': 0.35294117647058826, 'support': 57.0}, 'accuracy': 0.9643628509719222, 'macro avg': {'precision': 0.6892086330935252, 'recall': 
0.6503738454772028, 'f1-score': 0.6673090113335729, 'support': 1852.0}, 'weighted avg': {'precision': 0.9606150069145548, 'recall': 0.9643628509719222, 'f1-score': 0.9623259103572592, 'support': 1852.0}} | 0.0356 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.692031088619769, 'hamming': 0.01896598272138229} |
| 0.0124 | 17.0 | 1666 | 0.1262 | {'~LIN': {'precision': 0.9918389553862894, 'recall': 0.9934604904632153, 'f1-score': 0.9926490607133134, 'support': 1835.0}, 'LIN': {'precision': 0.14285714285714285, 'recall': 0.11764705882352941, 'f1-score': 0.12903225806451613, 'support': 17.0}, 'accuracy': 0.9854211663066955, 'macro avg': {'precision': 0.5673480491217161, 'recall': 0.5555537746433723, 'f1-score': 0.5608406593889148, 'support': 1852.0}, 'weighted avg': {'precision': 0.9840459257896396, 'recall': 0.9854211663066955, 'f1-score': 0.9847216926544422, 'support': 1852.0}} | 0.0146 | {'~SI': {'precision': 0.9840484048404841, 'recall': 0.9862183020948181, 'f1-score': 0.9851321585903083, 'support': 1814.0}, 'SI': {'precision': 0.2647058823529412, 'recall': 0.23684210526315788, 'f1-score': 0.25, 'support': 38.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.6243771435967126, 'recall': 0.611530203678988, 'f1-score': 0.6175660792951542, 'support': 1852.0}, 'weighted avg': {'precision': 0.9692886770572623, 'recall': 0.9708423326133909, 'f1-score': 0.970048453392451, 'support': 1852.0}} | 0.0292 | {'~CL': {'precision': 0.9803921568627451, 'recall': 0.9874326750448833, 'f1-score': 0.9838998211091234, 'support': 1671.0}, 'CL': {'precision': 0.8757396449704142, 'recall': 0.8176795580110497, 'f1-score': 0.8457142857142858, 'support': 181.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.9280659009165797, 'recall': 0.9025561165279665, 'f1-score': 0.9148070534117045, 'support': 1852.0}, 'weighted avg': {'precision': 0.970164238583851, 'recall': 0.9708423326133909, 'f1-score': 0.9703946472935372, 'support': 1852.0}} | 0.0292 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9962182603997839, 'recall': 0.9994579945799458, 'f1-score': 0.9978354978354979, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9956803455723542, 'macro avg': {'precision': 0.49810913019989195, 'recall': 0.4997289972899729, 'f1-score': 0.4989177489177489, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924528566077762, 'recall': 0.9956803455723542, 'f1-score': 0.9940639813749965, 'support': 1852.0}} | 0.0043 | {'~PL': {'precision': 0.9862712795167491, 'recall': 0.9966703662597114, 'f1-score': 0.9914435550648634, 'support': 1802.0}, 'PL': {'precision': 0.8064516129032258, 'recall': 0.5, 'f1-score': 0.6172839506172839, 'support': 50.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.8963614462099874, 'recall': 0.7483351831298557, 'f1-score': 0.8043637528410736, 'support': 1852.0}, 'weighted avg': {'precision': 0.9814165368975935, 'recall': 0.9832613390928726, 'f1-score': 0.9813420538648747, 'support': 1852.0}} | 0.0167 | {'~TI': {'precision': 0.9826330532212885, 'recall': 0.977158774373259, 'f1-score': 0.9798882681564246, 'support': 1795.0}, 'TI': {'precision': 0.3880597014925373, 'recall': 0.45614035087719296, 'f1-score': 0.41935483870967744, 'support': 57.0}, 'accuracy': 0.9611231101511879, 'macro avg': {'precision': 
0.6853463773569128, 'recall': 0.716649562625226, 'f1-score': 0.699621553433051, 'support': 1852.0}, 'weighted avg': {'precision': 0.964333549415382, 'recall': 0.9611231101511879, 'f1-score': 0.9626364293451587, 'support': 1852.0}} | 0.0389 | {'~PC': {'precision': 0.9889258028792912, 'recall': 0.9780941949616648, 'f1-score': 0.9834801762114538, 'support': 1826.0}, 'PC': {'precision': 0.13043478260869565, 'recall': 0.23076923076923078, 'f1-score': 0.16666666666666666, 'support': 26.0}, 'accuracy': 0.9676025917926566, 'macro avg': {'precision': 0.5596802927439934, 'recall': 0.6044317128654478, 'f1-score': 0.5750734214390603, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768735531346715, 'recall': 0.9676025917926566, 'f1-score': 0.9720130319089891, 'support': 1852.0}} | 0.0324 | {'f1-score': 0.6923666119133407, 'hamming': 0.020990820734341255} |
| 0.0124 | 18.0 | 1764 | 0.1405 | {'~LIN': {'precision': 0.99185667752443, 'recall': 0.9956403269754769, 'f1-score': 0.9937449007342942, 'support': 1835.0}, 'LIN': {'precision': 0.2, 'recall': 0.11764705882352941, 'f1-score': 0.14814814814814814, 'support': 17.0}, 'accuracy': 0.9875809935205183, 'macro avg': {'precision': 0.595928338762215, 'recall': 0.5566436928995031, 'f1-score': 0.5709465244412212, 'support': 1852.0}, 'weighted avg': {'precision': 0.9845880147177802, 'recall': 0.9875809935205183, 'f1-score': 0.9859829435021319, 'support': 1852.0}} | 0.0124 | {'~SI': {'precision': 0.9814004376367614, 'recall': 0.9889746416758545, 'f1-score': 0.985172981878089, 'support': 1814.0}, 'SI': {'precision': 0.16666666666666666, 'recall': 0.10526315789473684, 'f1-score': 0.12903225806451613, 'support': 38.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.574033552151714, 'recall': 0.5471188997852956, 'f1-score': 0.5571026199713025, 'support': 1852.0}, 'weighted avg': {'precision': 0.9646834380164246, 'recall': 0.9708423326133909, 'f1-score': 0.9676063795536205, 'support': 1852.0}} | 0.0292 | {'~CL': {'precision': 0.9860690490611751, 'recall': 0.9742669060442849, 'f1-score': 0.9801324503311258, 'support': 1671.0}, 'CL': {'precision': 0.7860696517412935, 'recall': 0.8729281767955801, 'f1-score': 0.8272251308900523, 'support': 181.0}, 'accuracy': 0.9643628509719222, 'macro avg': {'precision': 0.8860693504012342, 'recall': 0.9235975414199324, 'f1-score': 0.9036787906105891, 'support': 1852.0}, 'weighted avg': {'precision': 0.9665226716773205, 'recall': 0.9643628509719222, 'f1-score': 0.965188484446226, 'support': 1852.0}} | 0.0356 | {'~D': {'precision': 0.9989148128052089, 'recall': 0.999457111834962, 'f1-score': 0.9991858887381275, 'support': 1842.0}, 'D': {'precision': 0.8888888888888888, 'recall': 0.8, 'f1-score': 0.8421052631578947, 'support': 10.0}, 'accuracy': 0.9983801295896328, 'macro avg': {'precision': 0.9439018508470489, 'recall': 0.899728555917481, 'f1-score': 0.9206455759480111, 'support': 1852.0}, 'weighted avg': {'precision': 0.9983207203434578, 'recall': 0.9983801295896328, 'f1-score': 0.9983377212133963, 'support': 1852.0}} | 0.0016 | {'~HI': {'precision': 0.9972870320130223, 'recall': 0.9962059620596206, 'f1-score': 0.9967462039045553, 'support': 1845.0}, 'HI': {'precision': 0.2222222222222222, 'recall': 0.2857142857142857, 'f1-score': 0.25, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.6097546271176222, 'recall': 0.6409601238869531, 'f1-score': 0.6233731019522777, 'support': 1852.0}, 'weighted avg': {'precision': 0.9943575213928627, 'recall': 0.9935205183585313, 'f1-score': 0.9939237290517843, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9894503053858967, 'recall': 0.9889012208657048, 'f1-score': 0.9891756869275604, 'support': 1802.0}, 'PL': {'precision': 0.6078431372549019, 'recall': 0.62, 'f1-score': 0.6138613861386139, 'support': 50.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.7986467213203994, 'recall': 0.8044506104328524, 'f1-score': 0.8015185365330871, 'support': 1852.0}, 'weighted avg': {'precision': 0.9791477360519066, 'recall': 0.9789416846652268, 'f1-score': 0.9790430114202995, 'support': 1852.0}} | 0.0211 | {'~TI': {'precision': 0.9854312354312355, 'recall': 0.9420612813370474, 'f1-score': 0.9632583309598405, 'support': 1795.0}, 'TI': {'precision': 0.23529411764705882, 'recall': 0.5614035087719298, 'f1-score': 0.3316062176165803, 'support': 57.0}, 'accuracy': 0.9303455723542117, 'macro avg': 
{'precision': 0.6103626765391471, 'recall': 0.7517323950544885, 'f1-score': 0.6474322742882104, 'support': 1852.0}, 'weighted avg': {'precision': 0.9623438619357182, 'recall': 0.9303455723542117, 'f1-score': 0.9438176341668784, 'support': 1852.0}} | 0.0697 | {'~PC': {'precision': 0.9868780754510662, 'recall': 0.988499452354874, 'f1-score': 0.987688098495212, 'support': 1826.0}, 'PC': {'precision': 0.08695652173913043, 'recall': 0.07692307692307693, 'f1-score': 0.08163265306122448, 'support': 26.0}, 'accuracy': 0.9757019438444925, 'macro avg': {'precision': 0.5369172985950983, 'recall': 0.5327112646389754, 'f1-score': 0.5346603757782182, 'support': 1852.0}, 'weighted avg': {'precision': 0.9742441875479829, 'recall': 0.9757019438444925, 'f1-score': 0.9749680976413871, 'support': 1852.0}} | 0.0243 | {'f1-score': 0.6949197249403647, 'hamming': 0.025040496760259177} |
| 0.0124 | 19.0 | 1862 | 0.1226 | {'~LIN': {'precision': 0.9913373037357878, 'recall': 0.9978201634877384, 'f1-score': 0.9945681694731124, 'support': 1835.0}, 'LIN': {'precision': 0.2, 'recall': 0.058823529411764705, 'f1-score': 0.09090909090909091, 'support': 17.0}, 'accuracy': 0.9892008639308856, 'macro avg': {'precision': 0.5956686518678939, 'recall': 0.5283218464497516, 'f1-score': 0.5427386301911017, 'support': 1852.0}, 'weighted avg': {'precision': 0.9840734083991203, 'recall': 0.9892008639308856, 'f1-score': 0.9862732427260344, 'support': 1852.0}} | 0.0108 | {'~SI': {'precision': 0.9856274184632393, 'recall': 0.9829106945975744, 'f1-score': 0.9842671818934584, 'support': 1814.0}, 'SI': {'precision': 0.27906976744186046, 'recall': 0.3157894736842105, 'f1-score': 0.2962962962962963, 'support': 38.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 0.6323485929525499, 'recall': 0.6493500841408925, 'f1-score': 0.6402817390948774, 'support': 1852.0}, 'weighted avg': {'precision': 0.9711300152565372, 'recall': 0.9692224622030238, 'f1-score': 0.9701511486036678, 'support': 1852.0}} | 0.0308 | {'~CL': {'precision': 0.9764844209288653, 'recall': 0.9940155595451825, 'f1-score': 0.9851720047449585, 'support': 1671.0}, 'CL': {'precision': 0.9337748344370861, 'recall': 0.7790055248618785, 'f1-score': 0.8493975903614458, 'support': 181.0}, 'accuracy': 0.9730021598272138, 'macro avg': {'precision': 0.9551296276829757, 'recall': 0.8865105422035304, 'f1-score': 0.9172847975532021, 'support': 1852.0}, 'weighted avg': {'precision': 0.9723103198732433, 'recall': 0.9730021598272138, 'f1-score': 0.9719024750454898, 'support': 1852.0}} | 0.0270 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9972885032537961, 'recall': 0.9967479674796748, 'f1-score': 0.9970181621035511, 'support': 1845.0}, 'HI': {'precision': 0.25, 'recall': 0.2857142857142857, 'f1-score': 0.26666666666666666, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.6236442516268981, 'recall': 0.6412311265969802, 'f1-score': 0.6318424143851089, 'support': 1852.0}, 'weighted avg': {'precision': 0.9944639786734633, 'recall': 0.994060475161987, 'f1-score': 0.9942576542914245, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.9878520154610713, 'recall': 0.9927857935627081, 'f1-score': 0.9903127594796568, 'support': 1802.0}, 'PL': {'precision': 0.6829268292682927, 'recall': 0.56, 'f1-score': 0.6153846153846154, 'support': 50.0}, 'accuracy': 0.9811015118790497, 'macro avg': {'precision': 0.835389422364682, 'recall': 0.776392896781354, 'f1-score': 0.8028486874321361, 'support': 1852.0}, 'weighted avg': {'precision': 0.9796196940195816, 'recall': 0.9811015118790497, 'f1-score': 0.9801905093690995, 'support': 1852.0}} | 0.0189 | {'~TI': {'precision': 0.9813559322033898, 'recall': 0.9676880222841225, 'f1-score': 0.9744740532959327, 'support': 1795.0}, 'TI': {'precision': 0.2926829268292683, 'recall': 0.42105263157894735, 'f1-score': 0.34532374100719426, 'support': 57.0}, 'accuracy': 0.9508639308855291, 'macro avg': {'precision': 
0.6370194295163291, 'recall': 0.6943703269315349, 'f1-score': 0.6598988971515635, 'support': 1852.0}, 'weighted avg': {'precision': 0.9601602727507306, 'recall': 0.9508639308855291, 'f1-score': 0.9551103557794867, 'support': 1852.0}} | 0.0491 | {'~PC': {'precision': 0.9890710382513661, 'recall': 0.9912376779846659, 'f1-score': 0.9901531728665208, 'support': 1826.0}, 'PC': {'precision': 0.2727272727272727, 'recall': 0.23076923076923078, 'f1-score': 0.25, 'support': 26.0}, 'accuracy': 0.980561555075594, 'macro avg': {'precision': 0.6308991554893194, 'recall': 0.6110034543769484, 'f1-score': 0.6200765864332605, 'support': 1852.0}, 'weighted avg': {'precision': 0.9790143763163626, 'recall': 0.980561555075594, 'f1-score': 0.9797622535930167, 'support': 1852.0}} | 0.0194 | {'f1-score': 0.7129147559415929, 'hamming': 0.020518358531317494} |
| 0.0124 | 20.0 | 1960 | 0.1336 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9825136612021858, 'recall': 0.9911797133406836, 'f1-score': 0.986827661909989, 'support': 1814.0}, 'SI': {'precision': 0.2727272727272727, 'recall': 0.15789473684210525, 'f1-score': 0.2, 'support': 38.0}, 'accuracy': 0.9740820734341252, 'macro avg': {'precision': 0.6276204669647292, 'recall': 0.5745372250913944, 'f1-score': 0.5934138309549946, 'support': 1852.0}, 'weighted avg': {'precision': 0.9679500096028084, 'recall': 0.9740820734341252, 'f1-score': 0.9706832498405615, 'support': 1852.0}} | 0.0259 | {'~CL': {'precision': 0.9787234042553191, 'recall': 0.9910233393177738, 'f1-score': 0.9848349687778769, 'support': 1671.0}, 'CL': {'precision': 0.90625, 'recall': 0.8011049723756906, 'f1-score': 0.8504398826979472, 'support': 181.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.9424867021276595, 'recall': 0.8960641558467322, 'f1-score': 0.9176374257379121, 'support': 1852.0}, 'weighted avg': {'precision': 0.9716404203621156, 'recall': 0.9724622030237581, 'f1-score': 0.9717002438424195, 'support': 1852.0}} | 0.0275 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9851811196487377, 'recall': 0.9961154273029966, 'f1-score': 0.9906181015452539, 'support': 1802.0}, 'PL': {'precision': 0.7666666666666667, 'recall': 0.46, 'f1-score': 0.575, 'support': 50.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.8759238931577022, 'recall': 0.7280577136514983, 'f1-score': 0.7828090507726269, 'support': 1852.0}, 'weighted avg': {'precision': 0.9792817013716839, 'recall': 0.9816414686825053, 'f1-score': 0.9793973104668182, 'support': 1852.0}} | 0.0184 | {'~TI': {'precision': 0.9824263038548753, 'recall': 0.9654596100278552, 'f1-score': 0.9738690643439168, 'support': 1795.0}, 'TI': {'precision': 0.29545454545454547, 'recall': 0.45614035087719296, 'f1-score': 0.3586206896551724, 'support': 57.0}, 'accuracy': 0.9497840172786177, 'macro avg': {'precision': 0.6389404246547103, 'recall': 0.710799980452524, 
'f1-score': 0.6662448769995446, 'support': 1852.0}, 'weighted avg': {'precision': 0.9612830045952538, 'recall': 0.9497840172786177, 'f1-score': 0.9549332342374058, 'support': 1852.0}} | 0.0502 | {'~PC': {'precision': 0.9874522640480087, 'recall': 0.9912376779846659, 'f1-score': 0.9893413500956546, 'support': 1826.0}, 'PC': {'precision': 0.15789473684210525, 'recall': 0.11538461538461539, 'f1-score': 0.13333333333333333, 'support': 26.0}, 'accuracy': 0.9789416846652268, 'macro avg': {'precision': 0.5726735004450569, 'recall': 0.5533111466846407, 'f1-score': 0.561337341714494, 'support': 1852.0}, 'weighted avg': {'precision': 0.9758062080505177, 'recall': 0.9789416846652268, 'f1-score': 0.9773239589316048, 'support': 1852.0}} | 0.0211 | {'f1-score': 0.6931441320558849, 'hamming': 0.01991090712742981} |
| 0.003 | 21.0 | 2058 | 0.1408 | {'~LIN': {'precision': 0.992399565689468, 'recall': 0.9961852861035423, 'f1-score': 0.9942888224095731, 'support': 1835.0}, 'LIN': {'precision': 0.3, 'recall': 0.17647058823529413, 'f1-score': 0.2222222222222222, 'support': 17.0}, 'accuracy': 0.9886609071274298, 'macro avg': {'precision': 0.646199782844734, 'recall': 0.5863279371694182, 'f1-score': 0.6082555223158976, 'support': 1852.0}, 'weighted avg': {'precision': 0.986043846134003, 'recall': 0.9886609071274298, 'f1-score': 0.9872018179802076, 'support': 1852.0}} | 0.0113 | {'~SI': {'precision': 0.9830415754923414, 'recall': 0.9906284454244763, 'f1-score': 0.9868204283360791, 'support': 1814.0}, 'SI': {'precision': 0.2916666666666667, 'recall': 0.18421052631578946, 'f1-score': 0.22580645161290322, 'support': 38.0}, 'accuracy': 0.9740820734341252, 'macro avg': {'precision': 0.6373541210795041, 'recall': 0.5874194858701328, 'f1-score': 0.6063134399744912, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688556972334992, 'recall': 0.9740820734341252, 'f1-score': 0.9712056707143292, 'support': 1852.0}} | 0.0259 | {'~CL': {'precision': 0.9741935483870968, 'recall': 0.9940155595451825, 'f1-score': 0.9840047393364929, 'support': 1671.0}, 'CL': {'precision': 0.9319727891156463, 'recall': 0.7569060773480663, 'f1-score': 0.8353658536585366, 'support': 181.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.9530831687513716, 'recall': 0.8754608184466244, 'f1-score': 0.9096852964975147, 'support': 1852.0}, 'weighted avg': {'precision': 0.9700672214820577, 'recall': 0.9708423326133909, 'f1-score': 0.9694779367945329, 'support': 1852.0}} | 0.0292 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962141698215251, 'recall': 0.9983739837398374, 'f1-score': 0.9972929074174337, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.49810708491076255, 'recall': 0.4991869918699187, 'f1-score': 0.49864645370871685, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924487814906662, 'recall': 0.9946004319654428, 'f1-score': 0.9935234417846464, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9846491228070176, 'recall': 0.9966703662597114, 'f1-score': 0.9906232763375621, 'support': 1802.0}, 'PL': {'precision': 0.7857142857142857, 'recall': 0.44, 'f1-score': 0.5641025641025641, 'support': 50.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.8851817042606516, 'recall': 0.7183351831298557, 'f1-score': 0.7773629202200631, 'support': 1852.0}, 'weighted avg': {'precision': 0.9792783118703887, 'recall': 0.9816414686825053, 'f1-score': 0.9791081383182586, 'support': 1852.0}} | 0.0184 | {'~TI': {'precision': 0.9789356984478935, 'recall': 0.9838440111420613, 'f1-score': 0.981383717699361, 'support': 1795.0}, 'TI': {'precision': 0.3958333333333333, 'recall': 0.3333333333333333, 'f1-score': 0.3619047619047619, 'support': 57.0}, 'accuracy': 0.9638228941684666, 'macro avg': {'precision': 0.6873845158906134, 'recall': 
0.6585886722376973, 'f1-score': 0.6716442398020614, 'support': 1852.0}, 'weighted avg': {'precision': 0.9609892433660738, 'recall': 0.9638228941684666, 'f1-score': 0.9623176807229612, 'support': 1852.0}} | 0.0362 | {'~PC': {'precision': 0.9869918699186991, 'recall': 0.9972617743702081, 'f1-score': 0.9921002451648052, 'support': 1826.0}, 'PC': {'precision': 0.2857142857142857, 'recall': 0.07692307692307693, 'f1-score': 0.12121212121212122, 'support': 26.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.6363530778164924, 'recall': 0.5370924256466425, 'f1-score': 0.5566561831884632, 'support': 1852.0}, 'weighted avg': {'precision': 0.9771467202484428, 'recall': 0.984341252699784, 'f1-score': 0.9798739540078021, 'support': 1852.0}} | 0.0157 | {'f1-score': 0.6896137938748377, 'hamming': 0.01802105831533477} |
| 0.003 | 22.0 | 2156 | 0.1482 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.983032293377121, 'recall': 0.9900771775082691, 'f1-score': 0.9865421587475968, 'support': 1814.0}, 'SI': {'precision': 0.28, 'recall': 0.18421052631578946, 'f1-score': 0.2222222222222222, 'support': 38.0}, 'accuracy': 0.9735421166306696, 'macro avg': {'precision': 0.6315161466885605, 'recall': 0.5871438519120292, 'f1-score': 0.6043821904849095, 'support': 1852.0}, 'weighted avg': {'precision': 0.9686072247225148, 'recall': 0.9735421166306696, 'f1-score': 0.9708595682573352, 'support': 1852.0}} | 0.0265 | {'~CL': {'precision': 0.9742086752637749, 'recall': 0.9946140035906643, 'f1-score': 0.9843055966834469, 'support': 1671.0}, 'CL': {'precision': 0.9383561643835616, 'recall': 0.7569060773480663, 'f1-score': 0.8379204892966361, 'support': 181.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.9562824198236682, 'recall': 0.8757600404693653, 'f1-score': 0.9111130429900415, 'support': 1852.0}, 'weighted avg': {'precision': 0.9707047311658706, 'recall': 0.9713822894168467, 'f1-score': 0.9699990608103299, 'support': 1852.0}} | 0.0286 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9967532467532467, 'recall': 0.9983739837398374, 'f1-score': 0.9975629569455727, 'support': 1845.0}, 'HI': {'precision': 0.25, 'recall': 0.14285714285714285, 'f1-score': 0.18181818181818182, 'support': 7.0}, 'accuracy': 0.9951403887688985, 'macro avg': {'precision': 0.6233766233766234, 'recall': 0.5706155632984902, 'f1-score': 0.5896905693818772, 'support': 1852.0}, 'weighted avg': {'precision': 0.9939307452806373, 'recall': 0.9951403887688985, 'f1-score': 0.9944796883570783, 'support': 1852.0}} | 0.0049 | {'~PL': {'precision': 0.9846659364731654, 'recall': 0.9977802441731409, 'f1-score': 0.9911797133406836, 'support': 1802.0}, 'PL': {'precision': 0.8461538461538461, 'recall': 0.44, 'f1-score': 0.5789473684210527, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.9154098913135058, 'recall': 0.7188901220865704, 'f1-score': 0.7850635408808682, 'support': 1852.0}, 'weighted avg': {'precision': 0.9809264091967259, 'recall': 0.9827213822894169, 'f1-score': 0.9800503303784905, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9824462061155153, 'recall': 0.9665738161559888, 'f1-score': 0.9744453805110924, 'support': 1795.0}, 'TI': {'precision': 0.3023255813953488, 'recall': 0.45614035087719296, 'f1-score': 0.36363636363636365, 'support': 57.0}, 'accuracy': 0.9508639308855291, 'macro avg': {'precision': 0.6423858937554321, 
'recall': 0.7113570835165909, 'f1-score': 0.669040872073728, 'support': 1852.0}, 'weighted avg': {'precision': 0.9615137678816872, 'recall': 0.9508639308855291, 'f1-score': 0.9556461829074965, 'support': 1852.0}} | 0.0491 | {'~PC': {'precision': 0.9864425162689805, 'recall': 0.9961664841182913, 'f1-score': 0.9912806539509537, 'support': 1826.0}, 'PC': {'precision': 0.125, 'recall': 0.038461538461538464, 'f1-score': 0.058823529411764705, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.5557212581344902, 'recall': 0.5173140112899149, 'f1-score': 0.5250520916813591, 'support': 1852.0}, 'weighted avg': {'precision': 0.9743488308353987, 'recall': 0.9827213822894169, 'f1-score': 0.9781900031744856, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.6907592229937813, 'hamming': 0.01930345572354212} |
| 0.003 | 23.0 | 2254 | 0.1423 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9830415754923414, 'recall': 0.9906284454244763, 'f1-score': 0.9868204283360791, 'support': 1814.0}, 'SI': {'precision': 0.2916666666666667, 'recall': 0.18421052631578946, 'f1-score': 0.22580645161290322, 'support': 38.0}, 'accuracy': 0.9740820734341252, 'macro avg': {'precision': 0.6373541210795041, 'recall': 0.5874194858701328, 'f1-score': 0.6063134399744912, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688556972334992, 'recall': 0.9740820734341252, 'f1-score': 0.9712056707143292, 'support': 1852.0}} | 0.0259 | {'~CL': {'precision': 0.9736379613356766, 'recall': 0.9946140035906643, 'f1-score': 0.9840142095914742, 'support': 1671.0}, 'CL': {'precision': 0.9379310344827586, 'recall': 0.7513812154696132, 'f1-score': 0.8343558282208589, 'support': 181.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.9557844979092176, 'recall': 0.8729976095301388, 'f1-score': 0.9091850189061665, 'support': 1852.0}, 'weighted avg': {'precision': 0.9701482454823407, 'recall': 0.9708423326133909, 'f1-score': 0.9693877695115167, 'support': 1852.0}} | 0.0292 | {'~D': {'precision': 0.998371335504886, 'recall': 0.998371335504886, 'f1-score': 0.998371335504886, 'support': 1842.0}, 'D': {'precision': 0.7, 'recall': 0.7, 'f1-score': 0.7, 'support': 10.0}, 'accuracy': 0.9967602591792657, 'macro avg': {'precision': 0.849185667752443, 'recall': 0.849185667752443, 'f1-score': 0.849185667752443, 'support': 1852.0}, 'weighted avg': {'precision': 0.9967602591792657, 'recall': 0.9967602591792657, 'f1-score': 0.9967602591792657, 'support': 1852.0}} | 0.0032 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9867986798679867, 'recall': 0.9955604883462819, 'f1-score': 0.9911602209944751, 'support': 1802.0}, 'PL': {'precision': 0.7647058823529411, 'recall': 0.52, 'f1-score': 0.6190476190476191, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.8757522811104639, 'recall': 0.7577802441731409, 'f1-score': 0.805103920021047, 'support': 1852.0}, 'weighted avg': {'precision': 0.9808026540171486, 'recall': 0.9827213822894169, 'f1-score': 0.9811139844408343, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9815436241610739, 'recall': 0.9777158774373259, 'f1-score': 0.9796260117220207, 'support': 1795.0}, 'TI': {'precision': 0.375, 'recall': 0.42105263157894735, 'f1-score': 0.39669421487603307, 'support': 57.0}, 'accuracy': 0.9605831533477321, 'macro avg': {'precision': 0.6782718120805369, 'recall': 
0.6993842545081366, 'f1-score': 0.6881601132990269, 'support': 1852.0}, 'weighted avg': {'precision': 0.9628757048429414, 'recall': 0.9605831533477321, 'f1-score': 0.9616848063115341, 'support': 1852.0}} | 0.0394 | {'~PC': {'precision': 0.9869565217391304, 'recall': 0.9945235487404163, 'f1-score': 0.9907255864702673, 'support': 1826.0}, 'PC': {'precision': 0.16666666666666666, 'recall': 0.07692307692307693, 'f1-score': 0.10526315789473684, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.5768115942028985, 'recall': 0.5357233128317466, 'f1-score': 0.547994372182502, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754405734497761, 'recall': 0.9816414686825053, 'f1-score': 0.9782946884449089, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.6988936832107215, 'hamming': 0.0183585313174946} |
| 0.003 | 24.0 | 2352 | 0.1409 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9781453041937389, 'recall': 0.9910233393177738, 'f1-score': 0.9845422116527943, 'support': 1671.0}, 'CL': {'precision': 0.9056603773584906, 'recall': 0.7955801104972375, 'f1-score': 0.8470588235294118, 'support': 181.0}, 'accuracy': 0.9719222462203023, 'macro avg': {'precision': 0.9419028407761147, 'recall': 0.8933017249075057, 'f1-score': 0.915800517591103, 'support': 1852.0}, 'weighted avg': {'precision': 0.9710611941736633, 'recall': 0.9719222462203023, 'f1-score': 0.9711056602217294, 'support': 1852.0}} | 0.0281 | {'~D': {'precision': 0.998371335504886, 'recall': 0.998371335504886, 'f1-score': 0.998371335504886, 'support': 1842.0}, 'D': {'precision': 0.7, 'recall': 0.7, 'f1-score': 0.7, 'support': 10.0}, 'accuracy': 0.9967602591792657, 'macro avg': {'precision': 0.849185667752443, 'recall': 0.849185667752443, 'f1-score': 0.849185667752443, 'support': 1852.0}, 'weighted avg': {'precision': 0.9967602591792657, 'recall': 0.9967602591792657, 'f1-score': 0.9967602591792657, 'support': 1852.0}} | 0.0032 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.9851973684210527, 'recall': 0.9972253052164262, 'f1-score': 0.9911748483177054, 'support': 1802.0}, 'PL': {'precision': 0.8214285714285714, 'recall': 0.46, 'f1-score': 0.5897435897435898, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.903312969924812, 'recall': 0.7286126526082131, 'f1-score': 0.7904592190306476, 'support': 1852.0}, 'weighted avg': {'precision': 0.9807759646145602, 'recall': 0.9827213822894169, 'f1-score': 0.980337071358361, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9820627802690582, 'recall': 0.9760445682451253, 'f1-score': 0.9790444258172674, 'support': 1795.0}, 'TI': {'precision': 0.36764705882352944, 'recall': 0.43859649122807015, 'f1-score': 0.4, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6748549195462938, 'recall': 
0.7073205297365978, 'f1-score': 0.6895222129086337, 'support': 1852.0}, 'weighted avg': {'precision': 0.963152577179212, 'recall': 0.9595032397408207, 'f1-score': 0.961222864115548, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9879978177850518, 'recall': 0.9917853231106243, 'f1-score': 0.9898879475266467, 'support': 1826.0}, 'PC': {'precision': 0.21052631578947367, 'recall': 0.15384615384615385, 'f1-score': 0.17777777777777778, 'support': 26.0}, 'accuracy': 0.9800215982721382, 'macro avg': {'precision': 0.5992620667872628, 'recall': 0.5728157384783891, 'f1-score': 0.5838328626522122, 'support': 1852.0}, 'weighted avg': {'precision': 0.97708299108317, 'recall': 0.9800215982721382, 'f1-score': 0.9784868328325481, 'support': 1852.0}} | 0.0200 | {'f1-score': 0.6953406476864944, 'hamming': 0.019168466522678188} |
| 0.003 | 25.0 | 2450 | 0.1429 | {'~LIN': {'precision': 0.9913560237709347, 'recall': 1.0, 'f1-score': 0.9956592512208355, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.058823529411764705, 'f1-score': 0.1111111111111111, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.9956780118854673, 'recall': 0.5294117647058824, 'f1-score': 0.5533851811659733, 'support': 1852.0}, 'weighted avg': {'precision': 0.9914353691250891, 'recall': 0.9913606911447084, 'f1-score': 0.9875397488548175, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.983013698630137, 'recall': 0.9889746416758545, 'f1-score': 0.9859851607584501, 'support': 1814.0}, 'SI': {'precision': 0.25925925925925924, 'recall': 0.18421052631578946, 'f1-score': 0.2153846153846154, 'support': 38.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.6211364789446981, 'recall': 0.5865925839958219, 'f1-score': 0.6006848880715328, 'support': 1852.0}, 'weighted avg': {'precision': 0.9681634455544926, 'recall': 0.9724622030237581, 'f1-score': 0.970173702484041, 'support': 1852.0}} | 0.0275 | {'~CL': {'precision': 0.9747504403992954, 'recall': 0.9934171154997008, 'f1-score': 0.983995257854179, 'support': 1671.0}, 'CL': {'precision': 0.9261744966442953, 'recall': 0.7624309392265194, 'f1-score': 0.8363636363636363, 'support': 181.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.9504624685217953, 'recall': 0.8779240273631101, 'f1-score': 0.9101794471089076, 'support': 1852.0}, 'weighted avg': {'precision': 0.9700030074513174, 'recall': 0.9708423326133909, 'f1-score': 0.9695668974385266, 'support': 1852.0}} | 0.0292 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9867986798679867, 'recall': 0.9955604883462819, 'f1-score': 0.9911602209944751, 'support': 1802.0}, 'PL': {'precision': 0.7647058823529411, 'recall': 0.52, 'f1-score': 0.6190476190476191, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.8757522811104639, 'recall': 0.7577802441731409, 'f1-score': 0.805103920021047, 'support': 1852.0}, 'weighted avg': {'precision': 0.9808026540171486, 'recall': 0.9827213822894169, 'f1-score': 0.9811139844408343, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9820928931169558, 'recall': 0.9777158774373259, 'f1-score': 0.9798994974874372, 'support': 1795.0}, 'TI': {'precision': 0.38461538461538464, 'recall': 0.43859649122807015, 'f1-score': 0.4098360655737705, 'support': 57.0}, 'accuracy': 0.9611231101511879, 'macro avg': {'precision': 
0.6833541388661702, 'recall': 0.708156184332698, 'f1-score': 0.6948677815306039, 'support': 1852.0}, 'weighted avg': {'precision': 0.9637040065162056, 'recall': 0.9611231101511879, 'f1-score': 0.9623543486650403, 'support': 1852.0}} | 0.0389 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.6958576463844464, 'hamming': 0.018291036717062635} |
| 0.0005 | 26.0 | 2548 | 0.1435 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835255354200988, 'recall': 0.9873208379272327, 'f1-score': 0.9854195323246218, 'support': 1814.0}, 'SI': {'precision': 0.25806451612903225, 'recall': 0.21052631578947367, 'f1-score': 0.2318840579710145, 'support': 38.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.6207950257745656, 'recall': 0.5989235768583532, 'f1-score': 0.6086517951478181, 'support': 1852.0}, 'weighted avg': {'precision': 0.9686402661257896, 'recall': 0.9713822894168467, 'f1-score': 0.969958221295768, 'support': 1852.0}} | 0.0286 | {'~CL': {'precision': 0.9741784037558685, 'recall': 0.9934171154997008, 'f1-score': 0.9837037037037037, 'support': 1671.0}, 'CL': {'precision': 0.9256756756756757, 'recall': 0.7569060773480663, 'f1-score': 0.8328267477203647, 'support': 181.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.9499270397157721, 'recall': 0.8751615964238835, 'f1-score': 0.9082652257120343, 'support': 1852.0}, 'weighted avg': {'precision': 0.9694381263355041, 'recall': 0.9703023758099352, 'f1-score': 0.968958169668615, 'support': 1852.0}} | 0.0297 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.9867986798679867, 'recall': 0.9955604883462819, 'f1-score': 0.9911602209944751, 'support': 1802.0}, 'PL': {'precision': 0.7647058823529411, 'recall': 0.52, 'f1-score': 0.6190476190476191, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.8757522811104639, 'recall': 0.7577802441731409, 'f1-score': 0.805103920021047, 'support': 1852.0}, 'weighted avg': {'precision': 0.9808026540171486, 'recall': 0.9827213822894169, 'f1-score': 0.9811139844408343, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9820627802690582, 'recall': 0.9760445682451253, 'f1-score': 0.9790444258172674, 'support': 1795.0}, 'TI': {'precision': 0.36764705882352944, 'recall': 0.43859649122807015, 'f1-score': 0.4, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': 
{'precision': 0.6748549195462938, 'recall': 0.7073205297365978, 'f1-score': 0.6895222129086337, 'support': 1852.0}, 'weighted avg': {'precision': 0.963152577179212, 'recall': 0.9595032397408207, 'f1-score': 0.961222864115548, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.7006831832923643, 'hamming': 0.018763498920086392} |
| 0.0005 | 27.0 | 2646 | 0.1464 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829763866007688, 'recall': 0.9867695700110254, 'f1-score': 0.984869325997249, 'support': 1814.0}, 'SI': {'precision': 0.22580645161290322, 'recall': 0.18421052631578946, 'f1-score': 0.2028985507246377, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.604391419106836, 'recall': 0.5854900481634074, 'f1-score': 0.5938839383609433, 'support': 1852.0}, 'weighted avg': {'precision': 0.9674405024055533, 'recall': 0.9703023758099352, 'f1-score': 0.9688245692691932, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9741784037558685, 'recall': 0.9934171154997008, 'f1-score': 0.9837037037037037, 'support': 1671.0}, 'CL': {'precision': 0.9256756756756757, 'recall': 0.7569060773480663, 'f1-score': 0.8328267477203647, 'support': 181.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.9499270397157721, 'recall': 0.8751615964238835, 'f1-score': 0.9082652257120343, 'support': 1852.0}, 'weighted avg': {'precision': 0.9694381263355041, 'recall': 0.9703023758099352, 'f1-score': 0.968958169668615, 'support': 1852.0}} | 0.0297 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.986827661909989, 'recall': 0.9977802441731409, 'f1-score': 0.9922737306843267, 'support': 1802.0}, 'PL': {'precision': 0.8666666666666667, 'recall': 0.52, 'f1-score': 0.65, 'support': 50.0}, 'accuracy': 0.9848812095032398, 'macro avg': {'precision': 0.9267471642883278, 'recall': 0.7588901220865705, 'f1-score': 0.8211368653421633, 'support': 1852.0}, 'weighted avg': {'precision': 0.9835835745654068, 'recall': 0.9848812095032398, 'f1-score': 0.9830330792079679, 'support': 1852.0}} | 0.0151 | {'~TI': {'precision': 0.9820728291316526, 'recall': 0.9766016713091922, 'f1-score': 0.9793296089385475, 'support': 1795.0}, 'TI': {'precision': 0.373134328358209, 'recall': 0.43859649122807015, 'f1-score': 0.4032258064516129, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 
0.6776035787449308, 'recall': 0.7075990812686311, 'f1-score': 0.6912777076950802, 'support': 1852.0}, 'weighted avg': {'precision': 0.9633312014080639, 'recall': 0.9600431965442765, 'f1-score': 0.9615985523825241, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9869636067354699, 'recall': 0.9950711938663745, 'f1-score': 0.9910008181074448, 'support': 1826.0}, 'PC': {'precision': 0.18181818181818182, 'recall': 0.07692307692307693, 'f1-score': 0.10810810810810811, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.5843908942768259, 'recall': 0.5359971353947257, 'f1-score': 0.5495544631077764, 'support': 1852.0}, 'weighted avg': {'precision': 0.9756602692366311, 'recall': 0.9821814254859611, 'f1-score': 0.9786059960448191, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.7013152090475177, 'hamming': 0.01862850971922246} |
| 0.0005 | 28.0 | 2744 | 0.1470 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829763866007688, 'recall': 0.9867695700110254, 'f1-score': 0.984869325997249, 'support': 1814.0}, 'SI': {'precision': 0.22580645161290322, 'recall': 0.18421052631578946, 'f1-score': 0.2028985507246377, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.604391419106836, 'recall': 0.5854900481634074, 'f1-score': 0.5938839383609433, 'support': 1852.0}, 'weighted avg': {'precision': 0.9674405024055533, 'recall': 0.9703023758099352, 'f1-score': 0.9688245692691932, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.986286341195831, 'recall': 0.9977802441731409, 'f1-score': 0.992, 'support': 1802.0}, 'PL': {'precision': 0.8620689655172413, 'recall': 0.5, 'f1-score': 0.6329113924050633, 'support': 50.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.9241776533565362, 'recall': 0.7488901220865705, 'f1-score': 0.8124556962025317, 'support': 1852.0}, 'weighted avg': {'precision': 0.9829327403405775, 'recall': 0.984341252699784, 'f1-score': 0.9823053831642836, 'support': 1852.0}} | 0.0157 | {'~TI': {'precision': 0.9820627802690582, 'recall': 0.9760445682451253, 'f1-score': 0.9790444258172674, 'support': 1795.0}, 'TI': {'precision': 0.36764705882352944, 'recall': 0.43859649122807015, 'f1-score': 0.4, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6748549195462938, 'recall': 0.7073205297365978, 'f1-score': 0.6895222129086337, 'support': 1852.0}, 'weighted avg': {'precision': 0.963152577179212, 'recall': 0.9595032397408207, 'f1-score': 0.961222864115548, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9880239520958084, 'recall': 0.9939759036144579, 'f1-score': 0.990990990990991, 'support': 1826.0}, 'PC': {'precision': 0.26666666666666666, 'recall': 0.15384615384615385, 'f1-score': 0.1951219512195122, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6273453093812376, 'recall': 0.5739110287303059, 'f1-score': 0.5930564711052516, 'support': 1852.0}, 'weighted avg': {'precision': 0.9778969059720731, 'recall': 0.9821814254859611, 'f1-score': 0.9798178835211969, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.705207762277301, 'hamming': 0.01883099352051836} |
| 0.0005 | 29.0 | 2842 | 0.1483 | {'~LIN': {'precision': 0.9918831168831169, 'recall': 0.9989100817438692, 'f1-score': 0.995384197664947, 'support': 1835.0}, 'LIN': {'precision': 0.5, 'recall': 0.11764705882352941, 'f1-score': 0.19047619047619047, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.7459415584415585, 'recall': 0.5582785702836993, 'f1-score': 0.5929301940705687, 'support': 1852.0}, 'weighted avg': {'precision': 0.9873679910801942, 'recall': 0.9908207343412527, 'f1-score': 0.9879957332361085, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9835164835164835, 'recall': 0.9867695700110254, 'f1-score': 0.9851403412217942, 'support': 1814.0}, 'SI': {'precision': 0.25, 'recall': 0.21052631578947367, 'f1-score': 0.22857142857142856, 'support': 38.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.6167582417582418, 'recall': 0.5986479429002496, 'f1-score': 0.6068558848966114, 'support': 1852.0}, 'weighted avg': {'precision': 0.9684659293190611, 'recall': 0.9708423326133909, 'f1-score': 0.969616789018385, 'support': 1852.0}} | 0.0292 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.986286341195831, 'recall': 0.9977802441731409, 'f1-score': 0.992, 'support': 1802.0}, 'PL': {'precision': 0.8620689655172413, 'recall': 0.5, 'f1-score': 0.6329113924050633, 'support': 50.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.9241776533565362, 'recall': 0.7488901220865705, 'f1-score': 0.8124556962025317, 'support': 1852.0}, 'weighted avg': {'precision': 0.9829327403405775, 'recall': 0.984341252699784, 'f1-score': 0.9823053831642836, 'support': 1852.0}} | 0.0157 | {'~TI': {'precision': 0.9820527201346047, 'recall': 0.9754874651810584, 'f1-score': 0.9787590832867524, 'support': 1795.0}, 'TI': {'precision': 0.36231884057971014, 'recall': 0.43859649122807015, 'f1-score': 0.3968253968253968, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6721857803571574, 'recall': 0.7070419782045643, 'f1-score': 0.6877922400560746, 'support': 1852.0}, 'weighted avg': {'precision': 0.9629788372325372, 'recall': 0.958963282937365, 'f1-score': 0.9608485972563543, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9874863982589771, 'recall': 0.9939759036144579, 'f1-score': 0.9907205240174672, 'support': 1826.0}, 'PC': {'precision': 0.21428571428571427, 'recall': 0.11538461538461539, 'f1-score': 0.15, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.6008860562723457, 'recall': 0.5546802594995366, 'f1-score': 0.5703602620087336, 'support': 1852.0}, 'weighted avg': {'precision': 0.9766315290455296, 'recall': 0.9816414686825053, 'f1-score': 0.9789177520820169, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.7024887721419389, 'hamming': 0.019033477321814256} |
| 0.0005 | 30.0 | 2940 | 0.1476 | {'~LIN': {'precision': 0.9918831168831169, 'recall': 0.9989100817438692, 'f1-score': 0.995384197664947, 'support': 1835.0}, 'LIN': {'precision': 0.5, 'recall': 0.11764705882352941, 'f1-score': 0.19047619047619047, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.7459415584415585, 'recall': 0.5582785702836993, 'f1-score': 0.5929301940705687, 'support': 1852.0}, 'weighted avg': {'precision': 0.9873679910801942, 'recall': 0.9908207343412527, 'f1-score': 0.9879957332361085, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.9829950630828305, 'recall': 0.9878721058434399, 'f1-score': 0.9854275501787187, 'support': 1814.0}, 'SI': {'precision': 0.2413793103448276, 'recall': 0.18421052631578946, 'f1-score': 0.208955223880597, 'support': 38.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.612187186713829, 'recall': 0.5860413160796146, 'f1-score': 0.5971913870296579, 'support': 1852.0}, 'weighted avg': {'precision': 0.9677783251756793, 'recall': 0.9713822894168467, 'f1-score': 0.9694956125980877, 'support': 1852.0}} | 0.0286 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.9868131868131869, 'recall': 0.9966703662597114, 'f1-score': 0.9917172832689122, 'support': 1802.0}, 'PL': {'precision': 0.8125, 'recall': 0.52, 'f1-score': 0.6341463414634146, 'support': 50.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.8996565934065934, 'recall': 0.7583351831298557, 'f1-score': 0.8129318123661634, 'support': 1852.0}, 'weighted avg': {'precision': 0.9821071072555955, 'recall': 0.9838012958963283, 'f1-score': 0.9820636401316147, 'support': 1852.0}} | 0.0162 | {'~TI': {'precision': 0.9820627802690582, 'recall': 0.9760445682451253, 'f1-score': 0.9790444258172674, 'support': 1795.0}, 'TI': {'precision': 0.36764705882352944, 'recall': 0.43859649122807015, 'f1-score': 0.4, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6748549195462938, 'recall': 0.7073205297365978, 'f1-score': 0.6895222129086337, 'support': 1852.0}, 'weighted avg': {'precision': 0.963152577179212, 'recall': 0.9595032397408207, 'f1-score': 0.961222864115548, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.6993762223829327, 'hamming': 0.018763498920086392} |
| 0.0003 | 31.0 | 3038 | 0.1495 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9829857299670691, 'recall': 0.9873208379272327, 'f1-score': 0.9851485148514851, 'support': 1814.0}, 'SI': {'precision': 0.23333333333333334, 'recall': 0.18421052631578946, 'f1-score': 0.20588235294117646, 'support': 38.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.6081595316502012, 'recall': 0.585765682121511, 'f1-score': 0.5955154338963308, 'support': 1852.0}, 'weighted avg': {'precision': 0.9676040933190766, 'recall': 0.9708423326133909, 'f1-score': 0.9691592523500856, 'support': 1852.0}} | 0.0292 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.986827661909989, 'recall': 0.9977802441731409, 'f1-score': 0.9922737306843267, 'support': 1802.0}, 'PL': {'precision': 0.8666666666666667, 'recall': 0.52, 'f1-score': 0.65, 'support': 50.0}, 'accuracy': 0.9848812095032398, 'macro avg': {'precision': 0.9267471642883278, 'recall': 0.7588901220865705, 'f1-score': 0.8211368653421633, 'support': 1852.0}, 'weighted avg': {'precision': 0.9835835745654068, 'recall': 0.9848812095032398, 'f1-score': 0.9830330792079679, 'support': 1852.0}} | 0.0151 | {'~TI': {'precision': 0.9820527201346047, 'recall': 0.9754874651810584, 'f1-score': 0.9787590832867524, 'support': 1795.0}, 'TI': {'precision': 0.36231884057971014, 'recall': 0.43859649122807015, 'f1-score': 0.3968253968253968, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6721857803571574, 'recall': 0.7070419782045643, 'f1-score': 0.6877922400560746, 'support': 1852.0}, 'weighted avg': {'precision': 0.9629788372325372, 'recall': 0.958963282937365, 'f1-score': 0.9608485972563543, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9869565217391304, 'recall': 0.9945235487404163, 'f1-score': 0.9907255864702673, 'support': 1826.0}, 'PC': {'precision': 0.16666666666666666, 'recall': 0.07692307692307693, 'f1-score': 0.10526315789473684, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.5768115942028985, 'recall': 0.5357233128317466, 'f1-score': 0.547994372182502, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754405734497761, 'recall': 0.9816414686825053, 'f1-score': 0.9782946884449089, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.6999729073397388, 'hamming': 0.018898488120950324} |
| 0.0003 | 32.0 | 3136 | 0.1487 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829950630828305, 'recall': 0.9878721058434399, 'f1-score': 0.9854275501787187, 'support': 1814.0}, 'SI': {'precision': 0.2413793103448276, 'recall': 0.18421052631578946, 'f1-score': 0.208955223880597, 'support': 38.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.612187186713829, 'recall': 0.5860413160796146, 'f1-score': 0.5971913870296579, 'support': 1852.0}, 'weighted avg': {'precision': 0.9677783251756793, 'recall': 0.9713822894168467, 'f1-score': 0.9694956125980877, 'support': 1852.0}} | 0.0286 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967497291440953, 'recall': 0.997289972899729, 'f1-score': 0.9970197778379842, 'support': 1845.0}, 'HI': {'precision': 0.16666666666666666, 'recall': 0.14285714285714285, 'f1-score': 0.15384615384615385, 'support': 7.0}, 'accuracy': 0.994060475161987, 'macro avg': {'precision': 0.581708197905381, 'recall': 0.5700735578784359, 'f1-score': 0.575432965842069, 'support': 1852.0}, 'weighted avg': {'precision': 0.993612266164969, 'recall': 0.994060475161987, 'f1-score': 0.9938328364946026, 'support': 1852.0}} | 0.0059 | {'~PL': {'precision': 0.9868131868131869, 'recall': 0.9966703662597114, 'f1-score': 0.9917172832689122, 'support': 1802.0}, 'PL': {'precision': 0.8125, 'recall': 0.52, 'f1-score': 0.6341463414634146, 'support': 50.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.8996565934065934, 'recall': 0.7583351831298557, 'f1-score': 0.8129318123661634, 'support': 1852.0}, 'weighted avg': {'precision': 0.9821071072555955, 'recall': 0.9838012958963283, 'f1-score': 0.9820636401316147, 'support': 1852.0}} | 0.0162 | {'~TI': {'precision': 0.9820527201346047, 'recall': 0.9754874651810584, 'f1-score': 0.9787590832867524, 'support': 1795.0}, 'TI': {'precision': 0.36231884057971014, 'recall': 0.43859649122807015, 'f1-score': 0.3968253968253968, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6721857803571574, 'recall': 0.7070419782045643, 'f1-score': 0.6877922400560746, 'support': 1852.0}, 'weighted avg': {'precision': 0.9629788372325372, 'recall': 0.958963282937365, 'f1-score': 0.9608485972563543, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.7004471864850488, 'hamming': 0.018696004319654427} |
| 0.0003 | 33.0 | 3234 | 0.1509 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829950630828305, 'recall': 0.9878721058434399, 'f1-score': 0.9854275501787187, 'support': 1814.0}, 'SI': {'precision': 0.2413793103448276, 'recall': 0.18421052631578946, 'f1-score': 0.208955223880597, 'support': 38.0}, 'accuracy': 0.9713822894168467, 'macro avg': {'precision': 0.612187186713829, 'recall': 0.5860413160796146, 'f1-score': 0.5971913870296579, 'support': 1852.0}, 'weighted avg': {'precision': 0.9677783251756793, 'recall': 0.9713822894168467, 'f1-score': 0.9694956125980877, 'support': 1852.0}} | 0.0286 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.986827661909989, 'recall': 0.9977802441731409, 'f1-score': 0.9922737306843267, 'support': 1802.0}, 'PL': {'precision': 0.8666666666666667, 'recall': 0.52, 'f1-score': 0.65, 'support': 50.0}, 'accuracy': 0.9848812095032398, 'macro avg': {'precision': 0.9267471642883278, 'recall': 0.7588901220865705, 'f1-score': 0.8211368653421633, 'support': 1852.0}, 'weighted avg': {'precision': 0.9835835745654068, 'recall': 0.9848812095032398, 'f1-score': 0.9830330792079679, 'support': 1852.0}} | 0.0151 | {'~TI': {'precision': 0.9820527201346047, 'recall': 0.9754874651810584, 'f1-score': 0.9787590832867524, 'support': 1795.0}, 'TI': {'precision': 0.36231884057971014, 'recall': 0.43859649122807015, 'f1-score': 0.3968253968253968, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6721857803571574, 'recall': 0.7070419782045643, 'f1-score': 0.6877922400560746, 'support': 1852.0}, 'weighted avg': {'precision': 0.9629788372325372, 'recall': 0.958963282937365, 'f1-score': 0.9608485972563543, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9869706840390879, 'recall': 0.9956188389923329, 'f1-score': 0.9912758996728462, 'support': 1826.0}, 'PC': {'precision': 0.2, 'recall': 0.07692307692307693, 'f1-score': 0.1111111111111111, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.593485342019544, 'recall': 0.536270957957705, 'f1-score': 0.5511935053919786, 'support': 1852.0}, 'weighted avg': {'precision': 0.975922499489943, 'recall': 0.9827213822894169, 'f1-score': 0.9789193745634482, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.7020754831786116, 'hamming': 0.018561015118790495} |
| 0.0003 | 34.0 | 3332 | 0.1544 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9840220385674932, 'recall': 0.9845644983461963, 'f1-score': 0.9842931937172775, 'support': 1814.0}, 'SI': {'precision': 0.24324324324324326, 'recall': 0.23684210526315788, 'f1-score': 0.24, 'support': 38.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 0.6136326409053682, 'recall': 0.6107033018046771, 'f1-score': 0.6121465968586388, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688224736526327, 'recall': 0.9692224622030238, 'f1-score': 0.969021519116167, 'support': 1852.0}} | 0.0308 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9846659364731654, 'recall': 0.9977802441731409, 'f1-score': 0.9911797133406836, 'support': 1802.0}, 'PL': {'precision': 0.8461538461538461, 'recall': 0.44, 'f1-score': 0.5789473684210527, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.9154098913135058, 'recall': 0.7188901220865704, 'f1-score': 0.7850635408808682, 'support': 1852.0}, 'weighted avg': {'precision': 0.9809264091967259, 'recall': 0.9827213822894169, 'f1-score': 0.9800503303784905, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9820426487093153, 'recall': 0.9749303621169917, 'f1-score': 0.9784735812133072, 'support': 1795.0}, 'TI': {'precision': 0.35714285714285715, 'recall': 0.43859649122807015, 'f1-score': 0.3937007874015748, 'support': 57.0}, 'accuracy': 0.9584233261339092, 'macro avg': {'precision': 0.6695927529260862, 'recall': 0.7067634266725309, 'f1-score': 0.686087184307441, 'support': 1852.0}, 'weighted avg': {'precision': 0.9628097717550561, 'recall': 0.9584233261339092, 'f1-score': 0.9604757144491232, 'support': 1852.0}} | 0.0416 | {'~PC': {'precision': 0.9880174291938998, 'recall': 0.9934282584884995, 'f1-score': 0.9907154560349536, 'support': 1826.0}, 'PC': {'precision': 0.25, 'recall': 0.15384615384615385, 'f1-score': 0.19047619047619047, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.6190087145969498, 'recall': 0.5736372061673267, 'f1-score': 0.590595823255572, 'support': 1852.0}, 'weighted avg': {'precision': 0.9776564933628838, 'recall': 0.9816414686825053, 'f1-score': 0.979480995503351, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.7026258142837384, 'hamming': 0.01943844492440605} |
| 0.0003 | 35.0 | 3430 | 0.1513 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829670329670329, 'recall': 0.9862183020948181, 'f1-score': 0.9845899834892681, 'support': 1814.0}, 'SI': {'precision': 0.21875, 'recall': 0.18421052631578946, 'f1-score': 0.2, 'support': 38.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.6008585164835165, 'recall': 0.5852144142053037, 'f1-score': 0.5922949917446341, 'support': 1852.0}, 'weighted avg': {'precision': 0.9672865538888756, 'recall': 0.9697624190064795, 'f1-score': 0.9684914849079548, 'support': 1852.0}} | 0.0302 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9868204283360791, 'recall': 0.9972253052164262, 'f1-score': 0.991995583770356, 'support': 1802.0}, 'PL': {'precision': 0.8387096774193549, 'recall': 0.52, 'f1-score': 0.6419753086419753, 'support': 50.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.9127650528777169, 'recall': 0.7586126526082131, 'f1-score': 0.8169854462061656, 'support': 1852.0}, 'weighted avg': {'precision': 0.9828217579549581, 'recall': 0.984341252699784, 'f1-score': 0.9825457923252054, 'support': 1852.0}} | 0.0157 | {'~TI': {'precision': 0.9820627802690582, 'recall': 0.9760445682451253, 'f1-score': 0.9790444258172674, 'support': 1795.0}, 'TI': {'precision': 0.36764705882352944, 'recall': 0.43859649122807015, 'f1-score': 0.4, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6748549195462938, 'recall': 0.7073205297365978, 'f1-score': 0.6895222129086337, 'support': 1852.0}, 'weighted avg': {'precision': 0.963152577179212, 'recall': 0.9595032397408207, 'f1-score': 0.961222864115548, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9874863982589771, 'recall': 0.9939759036144579, 'f1-score': 0.9907205240174672, 'support': 1826.0}, 'PC': {'precision': 0.21428571428571427, 'recall': 0.11538461538461539, 'f1-score': 0.15, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.6008860562723457, 'recall': 0.5546802594995366, 'f1-score': 0.5703602620087336, 'support': 1852.0}, 'weighted avg': {'precision': 0.9766315290455296, 'recall': 0.9816414686825053, 'f1-score': 0.9789177520820169, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.7020345352294443, 'hamming': 0.019033477321814256} |
| 0.0003 | 36.0 | 3528 | 0.1543 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9730363423212193, 'recall': 0.9934171154997008, 'f1-score': 0.983121113414273, 'support': 1671.0}, 'CL': {'precision': 0.9246575342465754, 'recall': 0.7458563535911602, 'f1-score': 0.8256880733944955, 'support': 181.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 0.9488469382838973, 'recall': 0.8696367345454306, 'f1-score': 0.9044045934043843, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683081758733194, 'recall': 0.9692224622030238, 'f1-score': 0.9677348389846944, 'support': 1852.0}} | 0.0308 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9852054794520548, 'recall': 0.9977802441731409, 'f1-score': 0.9914529914529915, 'support': 1802.0}, 'PL': {'precision': 0.8518518518518519, 'recall': 0.46, 'f1-score': 0.5974025974025974, 'support': 50.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.9185286656519533, 'recall': 0.7288901220865704, 'f1-score': 0.7944277944277944, 'support': 1852.0}, 'weighted avg': {'precision': 0.9816052195276432, 'recall': 0.9832613390928726, 'f1-score': 0.9808144818943956, 'support': 1852.0}} | 0.0167 | {'~TI': {'precision': 0.9815126050420168, 'recall': 0.9760445682451253, 'f1-score': 0.9787709497206704, 'support': 1795.0}, 'TI': {'precision': 0.3582089552238806, 'recall': 0.42105263157894735, 'f1-score': 0.3870967741935484, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6698607801329487, 'recall': 0.6985485999120363, 'f1-score': 0.6829338619571094, 'support': 1852.0}, 'weighted avg': {'precision': 0.9623288534007458, 'recall': 0.958963282937365, 'f1-score': 0.9605606754198895, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9880304678998912, 'recall': 0.9945235487404163, 'f1-score': 0.9912663755458515, 'support': 1826.0}, 'PC': {'precision': 0.2857142857142857, 'recall': 0.15384615384615385, 'f1-score': 0.2, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.6368723768070885, 'recall': 0.574184851293285, 'f1-score': 0.5956331877729257, 'support': 1852.0}, 'weighted avg': {'precision': 0.9781707374804389, 'recall': 0.9827213822894169, 'f1-score': 0.9801578843124864, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.6939644322578371, 'hamming': 0.019168466522678188} |
| 0.0003 | 37.0 | 3626 | 0.1534 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829763866007688, 'recall': 0.9867695700110254, 'f1-score': 0.984869325997249, 'support': 1814.0}, 'SI': {'precision': 0.22580645161290322, 'recall': 0.18421052631578946, 'f1-score': 0.2028985507246377, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.604391419106836, 'recall': 0.5854900481634074, 'f1-score': 0.5938839383609433, 'support': 1852.0}, 'weighted avg': {'precision': 0.9674405024055533, 'recall': 0.9703023758099352, 'f1-score': 0.9688245692691932, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.986827661909989, 'recall': 0.9977802441731409, 'f1-score': 0.9922737306843267, 'support': 1802.0}, 'PL': {'precision': 0.8666666666666667, 'recall': 0.52, 'f1-score': 0.65, 'support': 50.0}, 'accuracy': 0.9848812095032398, 'macro avg': {'precision': 0.9267471642883278, 'recall': 0.7588901220865705, 'f1-score': 0.8211368653421633, 'support': 1852.0}, 'weighted avg': {'precision': 0.9835835745654068, 'recall': 0.9848812095032398, 'f1-score': 0.9830330792079679, 'support': 1852.0}} | 0.0151 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9869565217391304, 'recall': 0.9945235487404163, 'f1-score': 0.9907255864702673, 'support': 1826.0}, 'PC': {'precision': 0.16666666666666666, 'recall': 0.07692307692307693, 'f1-score': 0.10526315789473684, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.5768115942028985, 'recall': 0.5357233128317466, 'f1-score': 0.547994372182502, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754405734497761, 'recall': 0.9816414686825053, 'f1-score': 0.9782946884449089, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.6993473201073601, 'hamming': 0.018898488120950324} |
| 0.0003 | 38.0 | 3724 | 0.1530 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.983013698630137, 'recall': 0.9889746416758545, 'f1-score': 0.9859851607584501, 'support': 1814.0}, 'SI': {'precision': 0.25925925925925924, 'recall': 0.18421052631578946, 'f1-score': 0.2153846153846154, 'support': 38.0}, 'accuracy': 0.9724622030237581, 'macro avg': {'precision': 0.6211364789446981, 'recall': 0.5865925839958219, 'f1-score': 0.6006848880715328, 'support': 1852.0}, 'weighted avg': {'precision': 0.9681634455544926, 'recall': 0.9724622030237581, 'f1-score': 0.970173702484041, 'support': 1852.0}} | 0.0275 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967514889009204, 'recall': 0.9978319783197832, 'f1-score': 0.9972914409534128, 'support': 1845.0}, 'HI': {'precision': 0.2, 'recall': 0.14285714285714285, 'f1-score': 0.16666666666666666, 'support': 7.0}, 'accuracy': 0.9946004319654428, 'macro avg': {'precision': 0.5983757444504602, 'recall': 0.5703445605884631, 'f1-score': 0.5819790538100398, 'support': 1852.0}, 'weighted avg': {'precision': 0.9937400091912517, 'recall': 0.9946004319654428, 'f1-score': 0.9941519304674478, 'support': 1852.0}} | 0.0054 | {'~PL': {'precision': 0.9868204283360791, 'recall': 0.9972253052164262, 'f1-score': 0.991995583770356, 'support': 1802.0}, 'PL': {'precision': 0.8387096774193549, 'recall': 0.52, 'f1-score': 0.6419753086419753, 'support': 50.0}, 'accuracy': 0.984341252699784, 'macro avg': {'precision': 0.9127650528777169, 'recall': 0.7586126526082131, 'f1-score': 0.8169854462061656, 'support': 1852.0}, 'weighted avg': {'precision': 0.9828217579549581, 'recall': 0.984341252699784, 'f1-score': 0.9825457923252054, 'support': 1852.0}} | 0.0157 | {'~TI': {'precision': 0.9815332960268607, 'recall': 0.977158774373259, 'f1-score': 0.9793411501954216, 'support': 1795.0}, 'TI': {'precision': 0.36923076923076925, 'recall': 0.42105263157894735, 'f1-score': 0.39344262295081966, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 0.6753820326288149, 'recall': 0.6991057029761032, 'f1-score': 0.6863918865731207, 'support': 1852.0}, 'weighted avg': {'precision': 0.96268813186521, 'recall': 0.9600431965442765, 'f1-score': 0.961308636127958, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9869777536625067, 'recall': 0.9961664841182913, 'f1-score': 0.9915508312891796, 'support': 1826.0}, 'PC': {'precision': 0.2222222222222222, 'recall': 0.07692307692307693, 'f1-score': 0.11428571428571428, 'support': 26.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.6045999879423645, 'recall': 0.5365447805206841, 'f1-score': 0.552918272787447, 'support': 1852.0}, 'weighted avg': {'precision': 0.9762414449057858, 'recall': 0.9832613390928726, 'f1-score': 0.9792350143118091, 'support': 1852.0}} | 0.0167 | {'f1-score': 0.7020337951559105, 'hamming': 0.018291036717062635} |
| 0.0003 | 39.0 | 3822 | 0.1573 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9845644983461963, 'recall': 0.9845644983461963, 'f1-score': 0.9845644983461963, 'support': 1814.0}, 'SI': {'precision': 0.2631578947368421, 'recall': 0.2631578947368421, 'f1-score': 0.2631578947368421, 'support': 38.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.6238611965415192, 'recall': 0.6238611965415192, 'f1-score': 0.6238611965415192, 'support': 1852.0}, 'weighted avg': {'precision': 0.9697624190064795, 'recall': 0.9697624190064795, 'f1-score': 0.9697624190064795, 'support': 1852.0}} | 0.0302 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9830601092896175, 'recall': 0.9983351831298557, 'f1-score': 0.9906387665198237, 'support': 1802.0}, 'PL': {'precision': 0.8636363636363636, 'recall': 0.38, 'f1-score': 0.5277777777777778, 'support': 50.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.9233482364629906, 'recall': 0.6891675915649278, 'f1-score': 0.7592082721488007, 'support': 1852.0}, 'weighted avg': {'precision': 0.9798359260916355, 'recall': 0.9816414686825053, 'f1-score': 0.9781425195235481, 'support': 1852.0}} | 0.0184 | {'~TI': {'precision': 0.9814814814814815, 'recall': 0.9743732590529248, 'f1-score': 0.977914453452614, 'support': 1795.0}, 'TI': {'precision': 0.34285714285714286, 'recall': 0.42105263157894735, 'f1-score': 0.3779527559055118, 'support': 57.0}, 'accuracy': 0.9573434125269978, 'macro avg': {'precision': 0.6621693121693122, 'recall': 0.6977129453159361, 'f1-score': 0.6779336046790629, 'support': 1852.0}, 'weighted avg': {'precision': 0.961826196761402, 'recall': 0.9573434125269978, 'f1-score': 0.95944910962962, 'support': 1852.0}} | 0.0427 | {'~PC': {'precision': 0.9880174291938998, 'recall': 0.9934282584884995, 'f1-score': 0.9907154560349536, 'support': 1826.0}, 'PC': {'precision': 0.25, 'recall': 0.15384615384615385, 'f1-score': 0.19047619047619047, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.6190087145969498, 'recall': 0.5736372061673267, 'f1-score': 0.590595823255572, 'support': 1852.0}, 'weighted avg': {'precision': 0.9776564933628838, 'recall': 0.9816414686825053, 'f1-score': 0.979480995503351, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.6998390331990427, 'hamming': 0.019640928725701945} |
| 0.0003 | 40.0 | 3920 | 0.1534 | {'~LIN': {'precision': 0.9918831168831169, 'recall': 0.9989100817438692, 'f1-score': 0.995384197664947, 'support': 1835.0}, 'LIN': {'precision': 0.5, 'recall': 0.11764705882352941, 'f1-score': 0.19047619047619047, 'support': 17.0}, 'accuracy': 0.9908207343412527, 'macro avg': {'precision': 0.7459415584415585, 'recall': 0.5582785702836993, 'f1-score': 0.5929301940705687, 'support': 1852.0}, 'weighted avg': {'precision': 0.9873679910801942, 'recall': 0.9908207343412527, 'f1-score': 0.9879957332361085, 'support': 1852.0}} | 0.0092 | {'~SI': {'precision': 0.982957669048928, 'recall': 0.9856670341786108, 'f1-score': 0.9843104872006606, 'support': 1814.0}, 'SI': {'precision': 0.21212121212121213, 'recall': 0.18421052631578946, 'f1-score': 0.19718309859154928, 'support': 38.0}, 'accuracy': 0.9692224622030238, 'macro avg': {'precision': 0.59753944058507, 'recall': 0.5849387802472001, 'f1-score': 0.590746792896105, 'support': 1852.0}, 'weighted avg': {'precision': 0.9671413702566746, 'recall': 0.9692224622030238, 'f1-score': 0.9681599252313592, 'support': 1852.0}} | 0.0308 | {'~CL': {'precision': 0.9747504403992954, 'recall': 0.9934171154997008, 'f1-score': 0.983995257854179, 'support': 1671.0}, 'CL': {'precision': 0.9261744966442953, 'recall': 0.7624309392265194, 'f1-score': 0.8363636363636363, 'support': 181.0}, 'accuracy': 0.9708423326133909, 'macro avg': {'precision': 0.9504624685217953, 'recall': 0.8779240273631101, 'f1-score': 0.9101794471089076, 'support': 1852.0}, 'weighted avg': {'precision': 0.9700030074513174, 'recall': 0.9708423326133909, 'f1-score': 0.9695668974385266, 'support': 1852.0}} | 0.0292 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9868131868131869, 'recall': 0.9966703662597114, 'f1-score': 0.9917172832689122, 'support': 1802.0}, 'PL': {'precision': 0.8125, 'recall': 0.52, 'f1-score': 0.6341463414634146, 'support': 50.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.8996565934065934, 'recall': 0.7583351831298557, 'f1-score': 0.8129318123661634, 'support': 1852.0}, 'weighted avg': {'precision': 0.9821071072555955, 'recall': 0.9838012958963283, 'f1-score': 0.9820636401316147, 'support': 1852.0}} | 0.0162 | {'~TI': {'precision': 0.9820527201346047, 'recall': 0.9754874651810584, 'f1-score': 0.9787590832867524, 'support': 1795.0}, 'TI': {'precision': 0.36231884057971014, 'recall': 0.43859649122807015, 'f1-score': 0.3968253968253968, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6721857803571574, 'recall': 0.7070419782045643, 'f1-score': 0.6877922400560746, 'support': 1852.0}, 'weighted avg': {'precision': 0.9629788372325372, 'recall': 0.958963282937365, 'f1-score': 0.9608485972563543, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9874795862819815, 'recall': 0.9934282584884995, 'f1-score': 0.9904449904449905, 'support': 1826.0}, 'PC': {'precision': 0.2, 'recall': 0.11538461538461539, 'f1-score': 0.14634146341463414, 'support': 26.0}, 'accuracy': 0.9811015118790497, 'macro avg': {'precision': 0.5937397931409908, 'recall': 0.5544064369365574, 'f1-score': 0.5683932269298123, 'support': 1852.0}, 'weighted avg': {'precision': 0.9764242573168996, 'recall': 0.9811015118790497, 'f1-score': 0.9785947249467241, 'support': 1852.0}} | 0.0189 | {'f1-score': 0.7000648618970075, 'hamming': 0.01930345572354212} |
| 0.0002 | 41.0 | 4018 | 0.1554 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9857377948436643, 'recall': 0.9972253052164262, 'f1-score': 0.991448275862069, 'support': 1802.0}, 'PL': {'precision': 0.8275862068965517, 'recall': 0.48, 'f1-score': 0.6075949367088608, 'support': 50.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.9066620008701081, 'recall': 0.7386126526082131, 'f1-score': 0.7995216062854649, 'support': 1852.0}, 'weighted avg': {'precision': 0.9814680435491958, 'recall': 0.9832613390928726, 'f1-score': 0.9810850647618203, 'support': 1852.0}} | 0.0167 | {'~TI': {'precision': 0.9815332960268607, 'recall': 0.977158774373259, 'f1-score': 0.9793411501954216, 'support': 1795.0}, 'TI': {'precision': 0.36923076923076925, 'recall': 0.42105263157894735, 'f1-score': 0.39344262295081966, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 0.6753820326288149, 'recall': 0.6991057029761032, 'f1-score': 0.6863918865731207, 'support': 1852.0}, 'weighted avg': {'precision': 0.96268813186521, 'recall': 0.9600431965442765, 'f1-score': 0.961308636127958, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9880304678998912, 'recall': 0.9945235487404163, 'f1-score': 0.9912663755458515, 'support': 1826.0}, 'PC': {'precision': 0.2857142857142857, 'recall': 0.15384615384615385, 'f1-score': 0.2, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.6368723768070885, 'recall': 0.574184851293285, 'f1-score': 0.5956331877729257, 'support': 1852.0}, 'weighted avg': {'precision': 0.9781707374804389, 'recall': 0.9827213822894169, 'f1-score': 0.9801578843124864, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.7042208255184714, 'hamming': 0.018898488120950324} |
| 0.0002 | 42.0 | 4116 | 0.1554 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9862788144895719, 'recall': 0.9972253052164262, 'f1-score': 0.9917218543046358, 'support': 1802.0}, 'PL': {'precision': 0.8333333333333334, 'recall': 0.5, 'f1-score': 0.625, 'support': 50.0}, 'accuracy': 0.9838012958963283, 'macro avg': {'precision': 0.9098060739114526, 'recall': 0.7486126526082131, 'f1-score': 0.8083609271523179, 'support': 1852.0}, 'weighted avg': {'precision': 0.9821496168341659, 'recall': 0.9838012958963283, 'f1-score': 0.9818211562942514, 'support': 1852.0}} | 0.0162 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': 
{'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9880304678998912, 'recall': 0.9945235487404163, 'f1-score': 0.9912663755458515, 'support': 1826.0}, 'PC': {'precision': 0.2857142857142857, 'recall': 0.15384615384615385, 'f1-score': 0.2, 'support': 26.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.6368723768070885, 'recall': 0.574184851293285, 'f1-score': 0.5956331877729257, 'support': 1852.0}, 'weighted avg': {'precision': 0.9781707374804389, 'recall': 0.9827213822894169, 'f1-score': 0.9801578843124864, 'support': 1852.0}} | 0.0173 | {'f1-score': 0.7051080068059237, 'hamming': 0.018898488120950324} |
| 0.0002 | 43.0 | 4214 | 0.1562 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9857377948436643, 'recall': 0.9972253052164262, 'f1-score': 0.991448275862069, 'support': 1802.0}, 'PL': {'precision': 0.8275862068965517, 'recall': 0.48, 'f1-score': 0.6075949367088608, 'support': 50.0}, 'accuracy': 0.9832613390928726, 'macro avg': {'precision': 0.9066620008701081, 'recall': 0.7386126526082131, 'f1-score': 0.7995216062854649, 'support': 1852.0}, 'weighted avg': {'precision': 0.9814680435491958, 'recall': 0.9832613390928726, 'f1-score': 0.9810850647618203, 'support': 1852.0}} | 0.0167 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro 
avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9869565217391304, 'recall': 0.9945235487404163, 'f1-score': 0.9907255864702673, 'support': 1826.0}, 'PC': {'precision': 0.16666666666666666, 'recall': 0.07692307692307693, 'f1-score': 0.10526315789473684, 'support': 26.0}, 'accuracy': 0.9816414686825053, 'macro avg': {'precision': 0.5768115942028985, 'recall': 0.5357233128317466, 'f1-score': 0.547994372182502, 'support': 1852.0}, 'weighted avg': {'precision': 0.9754405734497761, 'recall': 0.9816414686825053, 'f1-score': 0.9782946884449089, 'support': 1852.0}} | 0.0184 | {'f1-score': 0.698048239748764, 'hamming': 0.019100971922246218} |
| 0.0002 | 44.0 | 4312 | 0.1570 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 
0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6917230420191579, 'hamming': 0.019168466522678188} |
| 0.0002 | 45.0 | 4410 | 0.1573 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9967479674796748, 'recall': 0.9967479674796748, 'f1-score': 0.9967479674796748, 'support': 1845.0}, 'HI': {'precision': 0.14285714285714285, 'recall': 0.14285714285714285, 'f1-score': 0.14285714285714285, 'support': 7.0}, 'accuracy': 0.9935205183585313, 'macro avg': {'precision': 0.5698025551684088, 'recall': 0.5698025551684088, 'f1-score': 0.5698025551684088, 'support': 1852.0}, 'weighted avg': {'precision': 0.9935205183585313, 'recall': 0.9935205183585313, 'f1-score': 0.9935205183585313, 'support': 1852.0}} | 0.0065 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815332960268607, 'recall': 0.977158774373259, 'f1-score': 0.9793411501954216, 'support': 1795.0}, 'TI': {'precision': 0.36923076923076925, 'recall': 0.42105263157894735, 'f1-score': 0.39344262295081966, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 
0.6753820326288149, 'recall': 0.6991057029761032, 'f1-score': 0.6863918865731207, 'support': 1852.0}, 'weighted avg': {'precision': 0.96268813186521, 'recall': 0.9600431965442765, 'f1-score': 0.961308636127958, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6976358376432111, 'hamming': 0.019168466522678188} |
| 0.0001 | 46.0 | 4508 | 0.1576 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 
0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6910481129691319, 'hamming': 0.01923596112311015} |
| 0.0001 | 47.0 | 4606 | 0.1579 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9846659364731654, 'recall': 0.9977802441731409, 'f1-score': 0.9911797133406836, 'support': 1802.0}, 'PL': {'precision': 0.8461538461538461, 'recall': 0.44, 'f1-score': 0.5789473684210527, 'support': 50.0}, 'accuracy': 0.9827213822894169, 'macro avg': {'precision': 0.9154098913135058, 'recall': 0.7188901220865704, 'f1-score': 0.7850635408808682, 'support': 1852.0}, 'weighted avg': {'precision': 0.9809264091967259, 'recall': 0.9827213822894169, 'f1-score': 0.9800503303784905, 'support': 1852.0}} | 0.0173 | {'~TI': {'precision': 0.9815126050420168, 'recall': 0.9760445682451253, 'f1-score': 0.9787709497206704, 'support': 1795.0}, 'TI': {'precision': 0.3582089552238806, 'recall': 0.42105263157894735, 'f1-score': 0.3870967741935484, 'support': 57.0}, 'accuracy': 0.958963282937365, 'macro avg': {'precision': 0.6698607801329487, 'recall': 
0.6985485999120363, 'f1-score': 0.6829338619571094, 'support': 1852.0}, 'weighted avg': {'precision': 0.9623288534007458, 'recall': 0.958963282937365, 'f1-score': 0.9605606754198895, 'support': 1852.0}} | 0.0410 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6920348747083737, 'hamming': 0.01923596112311015} |
| 0.0001 | 48.0 | 4704 | 0.1580 | {'~LIN': {'precision': 0.9918918918918919, 'recall': 1.0, 'f1-score': 0.9959294436906377, 'support': 1835.0}, 'LIN': {'precision': 1.0, 'recall': 0.11764705882352941, 'f1-score': 0.21052631578947367, 'support': 17.0}, 'accuracy': 0.9919006479481641, 'macro avg': {'precision': 0.9959459459459459, 'recall': 0.5588235294117647, 'f1-score': 0.6032278797400557, 'support': 1852.0}, 'weighted avg': {'precision': 0.9919663183702061, 'recall': 0.9919006479481641, 'f1-score': 0.9887200197304219, 'support': 1852.0}} | 0.0081 | {'~SI': {'precision': 0.9829670329670329, 'recall': 0.9862183020948181, 'f1-score': 0.9845899834892681, 'support': 1814.0}, 'SI': {'precision': 0.21875, 'recall': 0.18421052631578946, 'f1-score': 0.2, 'support': 38.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.6008585164835165, 'recall': 0.5852144142053037, 'f1-score': 0.5922949917446341, 'support': 1852.0}, 'weighted avg': {'precision': 0.9672865538888756, 'recall': 0.9697624190064795, 'f1-score': 0.9684914849079548, 'support': 1852.0}} | 0.0302 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983731019522777, 'recall': 0.999457111834962, 'f1-score': 0.9989148128052089, 'support': 1842.0}, 'D': {'precision': 0.875, 'recall': 0.7, 'f1-score': 0.7777777777777778, 'support': 10.0}, 'accuracy': 0.9978401727861771, 'macro avg': {'precision': 0.9366865509761388, 'recall': 0.849728555917481, 'f1-score': 0.8883462952914933, 'support': 1852.0}, 'weighted avg': {'precision': 0.997706940494652, 'recall': 0.9978401727861771, 'f1-score': 0.9977207683396181, 'support': 1852.0}} | 0.0022 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 'f1-score': 0.6846500160058847, 'support': 1852.0}, 
'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.690121596668628, 'hamming': 0.01923596112311015} |
| 0.0001 | 49.0 | 4802 | 0.1580 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815332960268607, 'recall': 0.977158774373259, 'f1-score': 0.9793411501954216, 'support': 1795.0}, 'TI': {'precision': 0.36923076923076925, 'recall': 0.42105263157894735, 'f1-score': 0.39344262295081966, 'support': 57.0}, 'accuracy': 0.9600431965442765, 'macro avg': {'precision': 0.6753820326288149, 'recall': 
0.6991057029761032, 'f1-score': 0.6863918865731207, 'support': 1852.0}, 'weighted avg': {'precision': 0.96268813186521, 'recall': 0.9600431965442765, 'f1-score': 0.961308636127958, 'support': 1852.0}} | 0.0400 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6886903882011021, 'hamming': 0.01923596112311015} |
| 0.0001 | 50.0 | 4900 | 0.1582 | {'~LIN': {'precision': 0.991887506760411, 'recall': 0.9994550408719346, 'f1-score': 0.995656894679696, 'support': 1835.0}, 'LIN': {'precision': 0.6666666666666666, 'recall': 0.11764705882352941, 'f1-score': 0.2, 'support': 17.0}, 'accuracy': 0.9913606911447084, 'macro avg': {'precision': 0.8292770867135388, 'recall': 0.558551049847732, 'f1-score': 0.597828447339848, 'support': 1852.0}, 'weighted avg': {'precision': 0.9889022182714295, 'recall': 0.9913606911447084, 'f1-score': 0.9883533486702172, 'support': 1852.0}} | 0.0086 | {'~SI': {'precision': 0.9835074216602528, 'recall': 0.9862183020948181, 'f1-score': 0.9848609964216901, 'support': 1814.0}, 'SI': {'precision': 0.24242424242424243, 'recall': 0.21052631578947367, 'f1-score': 0.22535211267605634, 'support': 38.0}, 'accuracy': 0.9703023758099352, 'macro avg': {'precision': 0.6129658320422476, 'recall': 0.5983723089421459, 'f1-score': 0.6051065545488732, 'support': 1852.0}, 'weighted avg': {'precision': 0.9683016112871597, 'recall': 0.9703023758099352, 'f1-score': 0.9692771208372764, 'support': 1852.0}} | 0.0297 | {'~CL': {'precision': 0.9736070381231672, 'recall': 0.9934171154997008, 'f1-score': 0.9834123222748815, 'support': 1671.0}, 'CL': {'precision': 0.9251700680272109, 'recall': 0.7513812154696132, 'f1-score': 0.8292682926829268, 'support': 181.0}, 'accuracy': 0.9697624190064795, 'macro avg': {'precision': 0.9493885530751891, 'recall': 0.872399165484657, 'f1-score': 0.9063403074789042, 'support': 1852.0}, 'weighted avg': {'precision': 0.9688731873740485, 'recall': 0.9697624190064795, 'f1-score': 0.9683474900091451, 'support': 1852.0}} | 0.0302 | {'~D': {'precision': 0.9983722192078134, 'recall': 0.998914223669924, 'f1-score': 0.9986431478968792, 'support': 1842.0}, 'D': {'precision': 0.7777777777777778, 'recall': 0.7, 'f1-score': 0.7368421052631579, 'support': 10.0}, 'accuracy': 0.9973002159827213, 'macro avg': {'precision': 0.8880749984927956, 'recall': 0.849457111834962, 'f1-score': 0.8677426265800186, 'support': 1852.0}, 'weighted avg': {'precision': 0.9971811045132668, 'recall': 0.9973002159827213, 'f1-score': 0.9972295353556605, 'support': 1852.0}} | 0.0027 | {'~HI': {'precision': 0.9962080173347779, 'recall': 0.9967479674796748, 'f1-score': 0.9964779192630724, 'support': 1845.0}, 'HI': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}, 'accuracy': 0.9929805615550756, 'macro avg': {'precision': 0.49810400866738896, 'recall': 0.4983739837398374, 'f1-score': 0.4982389596315362, 'support': 1852.0}, 'weighted avg': {'precision': 0.9924426522584585, 'recall': 0.9929805615550756, 'f1-score': 0.9927115340390759, 'support': 1852.0}} | 0.0070 | {'~PL': {'precision': 0.9841269841269841, 'recall': 0.9977802441731409, 'f1-score': 0.9909065858363185, 'support': 1802.0}, 'PL': {'precision': 0.84, 'recall': 0.42, 'f1-score': 0.56, 'support': 50.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.912063492063492, 'recall': 0.7088901220865704, 'f1-score': 0.7754532929181592, 'support': 1852.0}, 'weighted avg': {'precision': 0.980235866844938, 'recall': 0.9821814254859611, 'f1-score': 0.9792730387025086, 'support': 1852.0}} | 0.0178 | {'~TI': {'precision': 0.9815229563269877, 'recall': 0.9766016713091922, 'f1-score': 0.979056129572745, 'support': 1795.0}, 'TI': {'precision': 0.36363636363636365, 'recall': 0.42105263157894735, 'f1-score': 0.3902439024390244, 'support': 57.0}, 'accuracy': 0.9595032397408207, 'macro avg': {'precision': 0.6725796599816757, 'recall': 0.6988271514440698, 
'f1-score': 0.6846500160058847, 'support': 1852.0}, 'weighted avg': {'precision': 0.9625059283662072, 'recall': 0.9595032397408207, 'f1-score': 0.9609339389968152, 'support': 1852.0}} | 0.0405 | {'~PC': {'precision': 0.9874932028276238, 'recall': 0.9945235487404163, 'f1-score': 0.9909959072305593, 'support': 1826.0}, 'PC': {'precision': 0.23076923076923078, 'recall': 0.11538461538461539, 'f1-score': 0.15384615384615385, 'support': 26.0}, 'accuracy': 0.9821814254859611, 'macro avg': {'precision': 0.6091312167984273, 'recall': 0.5549540820625158, 'f1-score': 0.5724210305383566, 'support': 1852.0}, 'weighted avg': {'precision': 0.9768696481442986, 'recall': 0.9821814254859611, 'f1-score': 0.9792432649044284, 'support': 1852.0}} | 0.0178 | {'f1-score': 0.6884726543801976, 'hamming': 0.01930345572354212} |
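For reference, the per-label dictionaries in the table above have exactly the structure produced by scikit-learn's `classification_report(..., output_dict=True)`, and the `hamming` entries match `sklearn.metrics.hamming_loss`. A minimal sketch of how such metrics can be computed — assuming scikit-learn and a 0.5 decision threshold; this is not the card's original evaluation script — is:

```python
# Hedged sketch, not the original evaluation code. Label names come from the
# table above; the thresholding scheme is an assumption.
import numpy as np
from sklearn.metrics import classification_report, hamming_loss

LABELS = ["LIN", "SI", "CL", "D", "HI", "PL", "TI", "PC"]

def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    """y_true, y_prob: arrays of shape (n_samples, n_labels)."""
    y_pred = (y_prob >= threshold).astype(int)
    reports = {}
    for i, name in enumerate(LABELS):
        # One binary report per label, mirroring the '~LABEL' / 'LABEL' rows.
        reports[name] = classification_report(
            y_true[:, i], y_pred[:, i],
            labels=[0, 1],
            target_names=[f"~{name}", name],
            output_dict=True, zero_division=0,
        )
        reports[name]["hamming"] = hamming_loss(y_true[:, i], y_pred[:, i])
    overall_hamming = hamming_loss(y_true, y_pred)  # final summary column
    return reports, overall_hamming
```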
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
peskova/donut-lora-r8-exp | peskova | 2025-05-28T04:11:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T04:11:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hejazizo/lora-BioQwen-0.5B_2025-05-27_23-39 | hejazizo | 2025-05-28T04:06:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:liyinghong/BioQwen-0.5B",
"base_model:adapter:liyinghong/BioQwen-0.5B",
"license:mit",
"region:us"
] | null | 2025-05-28T03:40:13Z | ---
library_name: peft
license: mit
base_model: liyinghong/BioQwen-0.5B
tags:
- generated_from_trainer
model-index:
- name: lora-BioQwen-0.5B_2025-05-27_23-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-BioQwen-0.5B_2025-05-27_23-39
This model is a fine-tuned version of [liyinghong/BioQwen-0.5B](https://huggingface.co/liyinghong/BioQwen-0.5B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- gradient_accumulation_steps: 36
- total_train_batch_size: 1296
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
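A hedged reconstruction of how these hyperparameters map onto `transformers`/PEFT objects follows; only the values listed above come from the card, while the LoRA rank, alpha, and target modules are illustrative assumptions:

```python
# Sketch, not the author's training script. LoRA settings are assumptions.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="lora-BioQwen-0.5B_2025-05-27_23-39",
    learning_rate=2e-5,
    per_device_train_batch_size=36,
    per_device_eval_batch_size=36,
    gradient_accumulation_steps=36,   # total train batch size: 36 * 36 = 1296
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), eps=1e-08
    seed=42,
)

lora_config = LoraConfig(
    r=8,                              # rank/alpha/targets: assumed, not stated
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```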
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_0 | CodeAtCMU | 2025-05-28T04:05:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:03:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
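Until the authors fill this in, a generic sketch inferred from the repository's `text-generation` pipeline tag might look like the following; the prompt and generation settings are illustrative assumptions, not an official example.

```python
# Hedged, generic usage sketch based only on the repo's text-generation tag.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_0",
)
out = generator("Write a short note about unit testing:", max_new_tokens=64)
print(out[0]["generated_text"])
```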
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hastagaras/q4b-run-27-test | Hastagaras | 2025-05-28T04:01:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:58:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Keltezaa/Sabrina | Keltezaa | 2025-05-28T03:40:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | 2025-05-28T03:40:38Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Sabrina.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Sabrina
license: cc-by-nc-nd-4.0
---
# Sabrina
<Gallery />
## Model description
Sabrina
## Trigger words
You should use `Sabrina` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Sabrina/tree/main) them in the Files & versions tab.
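A minimal usage sketch, assuming a recent `diffusers` release with Flux support — this is not an official example from the repository, and the prompt and settings are illustrative:

```python
# Hedged sketch: load the FLUX.1-dev base model and apply this LoRA.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Keltezaa/Sabrina")  # Safetensors weights from this repo
pipe.to("cuda")

# Include the trigger word `Sabrina` in the prompt.
image = pipe(
    "portrait photo of Sabrina, soft light", num_inference_steps=28
).images[0]
image.save("sabrina.png")
```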
|