| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
royokong/prompteol-llama-7b | royokong | 2023-07-27T15:07:54Z | 3 | 0 | peft | ["peft", "region:us"] | null | 2023-07-27T15:06:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
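For reference, a minimal loading sketch that mirrors the config above. This is an illustrative reconstruction, not the author's script; the base checkpoint name is a placeholder, since the card does not state which Llama variant the adapter was trained on.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit quantization settings listed above
# (nf4, double quantization, float32 compute dtype).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

# "huggyllama/llama-7b" is a hypothetical base checkpoint, used for illustration only.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "royokong/prompteol-llama-7b")
```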
### Framework versions
- PEFT 0.4.0.dev0
|
Jonathaniu/llama2-breast-cancer-13b-knowledge-epoch-7 | Jonathaniu | 2023-07-27T15:03:39Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-07-27T15:03:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
janani4office2/results | janani4office2 | 2023-07-27T14:57:33Z | 7 | 0 | transformers | ["transformers", "pytorch", "mpt", "text-generation", "generated_from_trainer", "custom_code", "base_model:mosaicml/mpt-7b-instruct", "base_model:finetune:mosaicml/mpt-7b-instruct", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-27T09:53:30Z | ---
license: cc-by-sa-3.0
base_model: mosaicml/mpt-7b-instruct
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
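These settings map directly onto `transformers.TrainingArguments`. A hedged sketch follows: the output directory and any unlisted options are placeholders, and the Adam betas/epsilon given above are the Trainer defaults.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# The optimizer line in the card corresponds to the default AdamW betas/epsilon.
training_args = TrainingArguments(
    output_dir="results",            # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
)
```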
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1+cu116
- Datasets 2.14.0
- Tokenizers 0.12.1
|
Pierre-Arthur/distilbert-base-uncased-finetuned-imdb | Pierre-Arthur | 2023-07-27T14:55:00Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-07-27T14:51:24Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4125
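A loss of 2.4125 corresponds to a perplexity of roughly exp(2.4125) ≈ 11.2. Since this is a masked-language model, it can be queried through the `fill-mask` pipeline; a minimal usage sketch (the example sentence is purely illustrative):

```python
from transformers import pipeline

# Query the fine-tuned checkpoint with the fill-mask pipeline; DistilBERT uses [MASK].
fill_mask = pipeline(
    "fill-mask",
    model="Pierre-Arthur/distilbert-base-uncased-finetuned-imdb",
)
print(fill_mask("This movie was an absolute [MASK]."))
```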
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7026 | 1.0 | 157 | 2.4957 |
| 2.581 | 2.0 | 314 | 2.4286 |
| 2.5363 | 3.0 | 471 | 2.4515 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
aayushi08/segformer-b0-scene-parse-150_pretrained | aayushi08 | 2023-07-27T14:52:11Z | 43 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us"] | null | 2023-07-27T11:52:06Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150_pretrained
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2284
- Mean Iou: 0.0767
- Mean Accuracy: 0.1574
- Overall Accuracy: 0.5622
- Per Category Iou: [0.5148203561012767, 0.724040099091574, 0.6958825927435793, 0.38401244431532056, 0.29543194795602395, 0.29389807778274474, 0.0, 0.12126925156299818, 0.20467349613092675, 0.04878431281437682, 0.0, 0.1679011093073593, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan]
- Per Category Accuracy: [0.8140876905468601, 0.8295938962384349, 0.867831101268203, 0.8547256107829203, 0.39126018171899396, 0.31410348287229467, 0.0, 0.16157810162353853, 0.7849884441835724, 0.9576966932725199, nan, 0.3186048004107303, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan]
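To obtain per-pixel predictions from this checkpoint, a hedged inference sketch is shown below; the image path is a placeholder and the image processor is taken from the nvidia/mit-b0 base model.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Inference sketch: load the fine-tuned checkpoint and segment one image.
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(
    "aayushi08/segformer-b0-scene-parse-150_pretrained"
)

image = Image.open("example.jpg")                 # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits               # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]                    # per-pixel class indices
```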
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 4.7729 | 1.0 | 20 | 4.8806 | 0.0109 | 0.0500 | 0.2075 | [0.0325297525314704, 0.24495480446129927, 0.5035687103968282, 0.07590179316096747, 0.0208204321411237, 0.11755765952640118, 0.0012824676676576644, 0.11501857578251874, 0.004708489128929511, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0013707195075028857, nan, 0.0, 0.0, 0.0, 0.10670559106194026, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012752466783029957, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.038409172339663206, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.039392859389085724, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0] | [0.032714193506590106, 0.2835194865505293, 0.7925572293142232, 0.09808227298140203, 0.023401493632310616, 0.13673498638383258, 0.0016606280193236715, 0.2387377403446556, 0.004989177886202722, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.003921838447777625, nan, nan, nan, nan, 0.1382100892304974, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.11718494271685762, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.038891307502539545, nan, nan, nan, nan, nan, nan, nan, 0.09062118191756158, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.6133 | 2.0 | 40 | 4.5556 | 0.0240 | 0.0928 | 0.4200 | [0.3414124883027797, 0.5189284526020218, 0.511476355875916, 0.1606769579990087, 0.2191685362703107, 0.2398429986223389, 0.015511382795680331, 0.11331394590160879, 0.15028358081340668, 0.01438743301769067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02806674579347902, 0.0, 0.0, 0.0, 0.0006765899864682003, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.02215046624619006, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03344654459539279, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.011403657777022819, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0] | [0.3974436187647117, 0.6709077053142973, 0.9814366002966801, 0.30133978188970545, 0.24257416429955417, 0.3673578265093243, 0.019345238095238096, 0.2245433220664561, 0.19344069848490406, 0.04469783352337514, nan, 0.0, 0.0, nan, 0.0, 0.07707055214723926, 0.0, nan, 0.0, 0.0013357079252003562, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.02593868716317696, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.14828150572831425, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0161886695389364, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.0018 | 3.0 | 60 | 4.0966 | 0.0381 | 0.1065 | 0.5018 | [0.4579418950126497, 0.5478506343770332, 0.6281485983096435, 0.187622528313154, 0.12857750191310263, 0.2648201387568903, 0.0, 0.17438167563464907, 0.2715138857161505, 0.007824522617422025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0932277924362357, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0007550050195388662, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.015868077162414437, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0001977246456165967, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0] | [0.6575663269835709, 0.750747192817423, 0.9717910146320401, 0.5460234276591875, 0.14223367632950207, 0.35499976111987, 0.0, 0.37980458432611147, 0.3052202942147548, 0.0411630558722919, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0943900267141585, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0008039579468150897, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.01669394435351882, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0002089897755771333, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.7532 | 4.0 | 80 | 3.6052 | 0.0483 | 0.1219 | 0.5263 | [0.5050829619688341, 0.5167095890300885, 0.7748590774250136, 0.18315437529917458, 0.11704024897716543, 0.13685460073575936, 0.0, 0.2130983716844216, 0.29945226721356577, 0.057599769744830505, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan] | [0.8452302398736926, 0.7656999797573261, 0.96594446649813, 0.5077362468593599, 0.1259241144491055, 0.19461564187090918, 0.0, 0.3013058495410133, 0.4392310796434205, 0.7302166476624857, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.1001 | 5.0 | 100 | 3.4344 | 0.0660 | 0.1428 | 0.5519 | [0.5740286466908133, 0.5748238736928366, 0.770694415068295, 0.27976119037100783, 0.13865646665072914, 0.2115060410227592, 0.0, 0.2072166229048963, 0.2555005183734593, 0.047472124273325075, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.044365572315882874, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8366065922675107, 0.764253223943511, 0.9643113330408318, 0.7379712644065285, 0.15274929927199535, 0.28770722851273234, 0.0, 0.4467686226704346, 0.5695733519204667, 0.9087799315849487, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.04452359750667854, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 4.0427 | 6.0 | 120 | 3.2186 | 0.0650 | 0.1438 | 0.5559 | [0.5735339218911698, 0.6239798677665012, 0.7511513782853694, 0.2645688931826179, 0.12649460613253502, 0.24923481054964644, 0.0, 0.1969366951854885, 0.2184281686899488, 0.051422466461522716, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0623342175066313, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8391044267577821, 0.7624790131101082, 0.9706001156077415, 0.855931347124083, 0.1343328505050885, 0.33846129345627696, 0.0, 0.31312683548512216, 0.5571004072049598, 0.9165336374002281, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.06277827248441674, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.3803 | 7.0 | 140 | 3.0637 | 0.0701 | 0.1427 | 0.5502 | [0.5643446236009608, 0.6478939919910137, 0.7641745041997519, 0.26411100972559143, 0.19549661801352794, 0.1911980999487945, 0.0, 0.16826734984918662, 0.17217137814442804, 0.042858021905894904, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.003116651825467498, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8478744412265063, 0.7597745323346947, 0.9738489717178892, 0.7993762503620577, 0.20660659530841438, 0.27948178937142676, 0.0, 0.2462643837387562, 0.6370006236472358, 0.9530216647662486, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.003116651825467498, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.1859 | 8.0 | 160 | 3.0093 | 0.0584 | 0.1345 | 0.5279 | [0.5304954925773289, 0.630905617211838, 0.7114010240766968, 0.2654748809451504, 0.1130690161527166, 0.18241986166623642, 0.0, 0.1141937010923749, 0.150315689365187, 0.04692530210423179, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008904719501335708, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7662953107416481, 0.7716819875924317, 0.9095334600839897, 0.7949439905157722, 0.12022423296149289, 0.3263500708677719, 0.0, 0.1470327478251233, 0.626215194981474, 0.9174458380843785, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0008904719501335708, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.5901 | 9.0 | 180 | 2.8961 | 0.0623 | 0.1320 | 0.5360 | [0.5195164676654499, 0.6543788786036646, 0.6849384372869802, 0.30794058237074823, 0.1333599486209231, 0.15503567223107292, 0.0, 0.08126954631008769, 0.22258699934340118, 0.04523293026052965, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8745683977147574, 0.768743823007585, 0.8957162456734151, 0.7660669419427848, 0.14452867811659362, 0.2138693803449429, 0.0, 0.15671118006686247, 0.4974503833596243, 0.9605473204104903, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.6568 | 10.0 | 200 | 2.6655 | 0.0635 | 0.1318 | 0.5376 | [0.5089110011356949, 0.5947639210280143, 0.7501099711752571, 0.2960618158114864, 0.09897355720209366, 0.13247966647434348, 0.0, 0.04938747761057435, 0.25216933229927274, 0.049225711566744525, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8569419883338412, 0.791147700075017, 0.95600637931875, 0.7128461440012933, 0.10955811809853458, 0.1700348764989728, 0.0, 0.06977152250604902, 0.6321948714186141, 0.9732041049030786, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 3.0827 | 11.0 | 220 | 2.5540 | 0.0664 | 0.1328 | 0.5525 | [0.5477061884254247, 0.6076672388749504, 0.6988056319090914, 0.32100561831494234, 0.0796511455145158, 0.1849044459508501, 0.0, 0.06290754292548194, 0.2377632194665419, 0.04846934405354012, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8413215248067838, 0.8561951512842191, 0.8976592914499022, 0.7920475289140964, 0.10084839820162154, 0.21502396763970505, 0.0, 0.08161097874069559, 0.5591914597013831, 0.9678449258836944, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7122 | 12.0 | 240 | 2.6093 | 0.0663 | 0.1347 | 0.5440 | [0.48626172067111173, 0.6938522126174008, 0.6745183497862148, 0.32800975475961913, 0.13442052689527517, 0.1590950988912591, 0.0, 0.03191117986488059, 0.28731424271802514, 0.055913045911087485, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8683034161065935, 0.8176626260701826, 0.8805792922856208, 0.7411102204678796, 0.1584679922496661, 0.24051247750545443, 0.0, 0.04305424724330914, 0.7284199713855974, 0.9115165336374003, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7849 | 13.0 | 260 | 2.5046 | 0.0657 | 0.1399 | 0.5480 | [0.4882436604761502, 0.6822540965256525, 0.7004956509062636, 0.3247556811491817, 0.13196717267240105, 0.11096064594061923, 0.0, 0.02708401300129288, 0.3101351020607959, 0.04951936249885834, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.858285684121115, 0.8391704671294697, 0.9035476255144893, 0.7431512155034791, 0.1509433962264151, 0.13897249693437166, 0.0, 0.041401156240187656, 0.9297112880149675, 0.9891676168757126, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.9403 | 14.0 | 280 | 2.6340 | 0.0659 | 0.1387 | 0.5293 | [0.48312476897435797, 0.6488606361658413, 0.6648547679594857, 0.3053698024726054, 0.20489118952038876, 0.12576909929926508, 0.0, 0.013371640156689207, 0.34921209139450415, 0.05279407025459233, 0.0, 0.05094082693736073, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8812484853429183, 0.765260892344697, 0.8539546901225025, 0.7465461379389318, 0.21890930980642978, 0.18750497666937396, 0.0, 0.021878059141870302, 0.8853222788803697, 0.9339794754846066, nan, 0.05281735335643691, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3078 | 15.0 | 300 | 2.5251 | 0.0644 | 0.1402 | 0.5368 | [0.5032657249511785, 0.6702271327640467, 0.6718064850372001, 0.30504826506652755, 0.1492842535787321, 0.16564926971140018, 0.0, 0.016966269440517066, 0.18991325708144624, 0.048684350697773514, nan, 0.05048798798798799, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7896420250455297, 0.8229019063835867, 0.8759932863938046, 0.8191058690395199, 0.16890836923192687, 0.20635261892249135, 0.0, 0.026375574887792984, 0.9075901537107011, 0.9379703534777651, nan, 0.051790527531767425, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1205 | 16.0 | 320 | 2.4200 | 0.0671 | 0.1408 | 0.5424 | [0.4851497985135246, 0.6844669447293905, 0.6787579124670596, 0.3294613919560565, 0.20455656925074622, 0.08834832285596292, 0.0, 0.026740147090214036, 0.2962578442229605, 0.05154904633008221, nan, 0.04160365166222124, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8799191862962226, 0.8123995308462628, 0.8699970053416348, 0.7538950672585327, 0.22818337440508663, 0.1226490213877343, 0.0, 0.038178090541364215, 0.9421475475989581, 0.9359179019384265, nan, 0.048549608522654344, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.7337 | 17.0 | 340 | 2.4611 | 0.0715 | 0.1478 | 0.5546 | [0.491625569615171, 0.7171170389611052, 0.6864015302376366, 0.3032877086334042, 0.21901611424079653, 0.1455949153673077, 0.0, 0.015259275152876733, 0.3620399802217984, 0.052233755188337394, 0.0, 0.15355001924874884, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8814404418839574, 0.8366475750467368, 0.8821915327775804, 0.7332965101005678, 0.25158486803739727, 0.19115984265762107, 0.0, 0.02233058126004322, 0.9401298653655673, 0.9944127708095781, nan, 0.17918110640482607, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2253 | 18.0 | 360 | 2.5361 | 0.0683 | 0.1453 | 0.5332 | [0.4819568731474902, 0.680265368149286, 0.6843025301041807, 0.2899856590091187, 0.3087323785295647, 0.14888743830235568, 0.0, 0.024825875282443104, 0.17798215487023333, 0.06359447004608294, 0.0, 0.14556064830128054, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8169910332300767, 0.7745591264690823, 0.8805827744465106, 0.7940817879924826, 0.41321319061682876, 0.20704537129934866, 0.0, 0.03936942428104394, 0.7376279393961628, 0.9519954389965792, nan, 0.19769605955589784, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3414 | 19.0 | 380 | 2.4640 | 0.0683 | 0.1439 | 0.5352 | [0.4849713967261648, 0.7036371871576641, 0.6922972055523594, 0.3356658592901123, 0.2402872807341619, 0.1596577580552716, 0.0, 0.047547589564925385, 0.2061945641719802, 0.04166013276880456, 0.0, 0.025974025974025976, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7955566859662973, 0.7954239649444517, 0.8841519893585164, 0.7990663963302506, 0.3058748283451532, 0.21515137037567883, 0.0, 0.08368888642618348, 0.8494075351260134, 0.9996579247434435, nan, 0.026761648055448596, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.508 | 20.0 | 400 | 2.4162 | 0.0730 | 0.1498 | 0.5541 | [0.4861101723555255, 0.7257792619019059, 0.699673591241319, 0.33684785322016975, 0.2880978687290836, 0.1881996877887158, 0.0, 0.04428891975638423, 0.2535444554403875, 0.05175622381069756, 0.0, 0.1379656130528339, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8455877589313779, 0.8375540300782319, 0.8830133227475643, 0.7271667890365561, 0.33845632912583007, 0.23519341327855015, 0.0, 0.061247483422914244, 0.8928794159727063, 0.9576966932725199, nan, 0.2119111795661661, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9607 | 21.0 | 420 | 2.3918 | 0.0702 | 0.1504 | 0.5396 | [0.507695581246897, 0.6985780592300636, 0.6698981931830353, 0.3268301579730071, 0.3054300659810973, 0.21641804793868566, 0.0, 0.019582922325922552, 0.18294713323002632, 0.04517401704445434, 0.0, 0.1149816335083697, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.80723964094529, 0.8035641990450221, 0.8493826128742452, 0.7845975602362973, 0.3866325551646946, 0.2806045259821955, 0.0, 0.02860124489758224, 0.9311786932756154, 0.9948688711516533, nan, 0.14965986394557823, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.9236 | 22.0 | 440 | 2.3403 | 0.0712 | 0.1488 | 0.5518 | [0.5364709094748005, 0.7157040135965486, 0.6919605889395992, 0.3555111122884162, 0.2598097326773754, 0.2303148717750308, 0.0, 0.01760396975425331, 0.2036683013326684, 0.04360612209112998, 0.0, 0.07802606547602146, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8108484239168252, 0.8148301401507484, 0.8874809351691285, 0.9040260816263295, 0.3298218551891495, 0.28440272004841305, 0.0, 0.030272806191241387, 0.8090172053266811, 0.9924743443557583, nan, 0.08817866769349249, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9482 | 23.0 | 460 | 2.3411 | 0.0720 | 0.1514 | 0.5668 | [0.49056244853922376, 0.7266009942762334, 0.7052889865732858, 0.3548955744562617, 0.22703973358581736, 0.19574884192344205, 0.0, 0.05695680486216627, 0.23538302848330728, 0.049893043654919395, 0.0, 0.19959628089062884, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8291610779319563, 0.8825776068396423, 0.8928016770086845, 0.7743386974005941, 0.2616302037284373, 0.2183762521300145, 0.0, 0.08481557414898136, 0.8455189111852967, 0.9521094640820981, nan, 0.3141124374278013, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2529 | 24.0 | 480 | 2.3104 | 0.0725 | 0.1497 | 0.5836 | [0.5113420655011527, 0.8045573464173746, 0.6962598456991187, 0.3590822991203078, 0.27860642520466383, 0.1485592640462252, 0.0, 0.023266297678379122, 0.2656858185022889, 0.046793389845020975, 0.0, 0.1291229211186472, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8680610709735316, 0.9369812814803349, 0.9006539498150973, 0.7750998605656857, 0.33802366485449314, 0.17114965043874317, 0.0, 0.025738349864243365, 0.7962874646905609, 0.9970353477765108, nan, 0.17837889872930304, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0998 | 25.0 | 500 | 2.4495 | 0.0716 | 0.1541 | 0.5391 | [0.43688653221954715, 0.733287497831381, 0.6921168863095847, 0.3378376929961361, 0.28901953901953903, 0.25230697522202, 0.0, 0.0300467152913023, 0.14611836498363812, 0.051168724933002056, 0.0, 0.1828301028913999, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.6452835078138309, 0.8493051999857111, 0.872507643343153, 0.8492627494830153, 0.37175266652871575, 0.4051645884095361, 0.0, 0.04359912081417041, 0.823948053853773, 0.9557582668187001, nan, 0.34838274932614555, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.5943 | 26.0 | 520 | 2.3525 | 0.0733 | 0.1537 | 0.5399 | [0.4964431098359949, 0.6897805528483147, 0.7012728391399409, 0.3524738700399841, 0.32639010699877213, 0.2303413215501499, 0.0, 0.05761208001790838, 0.1800597813262015, 0.0469448823964735, 0.0, 0.14580612004539711, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8009074745477623, 0.7669056096021719, 0.8890339789259623, 0.8123160241686145, 0.42004176150792905, 0.3254264010319622, 0.0, 0.07843408876821632, 0.8397593455372537, 0.998175598631699, nan, 0.21848928250545502, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6678 | 27.0 | 540 | 2.2825 | 0.0787 | 0.1577 | 0.5861 | [0.5126736049233109, 0.772986405128431, 0.7114639216183581, 0.38754642455125743, 0.29371878188946776, 0.2553816111517934, 0.0, 0.13962258581117482, 0.20097674111646413, 0.04935255174150135, 0.0, 0.1400173193495622, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.828335664805488, 0.9144300496540885, 0.8986586716252638, 0.7745811918602693, 0.4115013450215392, 0.30824295701750193, 0.0, 0.17471971334109085, 0.7820169485307605, 0.9834663625997719, nan, 0.23347452188422538, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1303 | 28.0 | 560 | 2.3182 | 0.0758 | 0.1610 | 0.5611 | [0.5244217877873839, 0.7196797467040382, 0.7154193001600868, 0.3697657853229992, 0.29826594815907514, 0.2688369361764598, 0.0, 0.1257064600856439, 0.16174030561725586, 0.05200386136455641, 0.0, 0.1740916271721959, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.767178310830428, 0.8255051737893094, 0.899459568629909, 0.8811642428447295, 0.3727496755017965, 0.2916328253149236, 0.0, 0.22718457361334293, 0.88686305440405, 0.9521094640820981, nan, 0.335932486202028, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.0753 | 29.0 | 580 | 2.3862 | 0.0727 | 0.1559 | 0.5459 | [0.5125531180422713, 0.6913985422892837, 0.7135806413409606, 0.354653629423774, 0.33716292636466455, 0.23221484314434854, 0.0, 0.07243706665192746, 0.19018123761937145, 0.043853324272872043, 0.0, 0.12527584076264453, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8158584896379459, 0.768236267727224, 0.9021895827674822, 0.8389904147328856, 0.4261931187569367, 0.28621820903603906, 0.0, 0.10892853844590976, 0.9439084339117356, 0.9639680729760547, nan, 0.1821653189577718, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.1803 | 30.0 | 600 | 2.3013 | 0.0740 | 0.1545 | 0.5533 | [0.5097797474754986, 0.7108171722194596, 0.7005830611824793, 0.35823921708559114, 0.32186401376318347, 0.26049934774566624, 0.0, 0.05290972927345461, 0.2489013269204167, 0.04425081424655228, 0.0, 0.12392266480316795, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8224881886740842, 0.797866481704195, 0.8769717736038276, 0.8723805546387169, 0.40472920860061323, 0.3196056885321612, 0.0, 0.07988400657542342, 0.858101911295352, 0.9589509692132269, nan, 0.18778077268643306, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7699 | 31.0 | 620 | 2.3166 | 0.0734 | 0.1549 | 0.5524 | [0.4792127260216525, 0.7026433433542578, 0.7059736466564126, 0.3817381108982241, 0.3173152259075477, 0.20516239705695122, 0.0, 0.13988002699771732, 0.18151654002499318, 0.04790945097194372, 0.0, 0.14376245178245942, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8472337862707883, 0.8057000988318787, 0.8854403888877281, 0.7193463427120311, 0.3885513271506236, 0.24379309795677861, 0.0, 0.17034225448366302, 0.92718001394035, 0.988939566704675, nan, 0.21765498652291104, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.7281 | 32.0 | 640 | 2.3447 | 0.0759 | 0.1618 | 0.5498 | [0.5100269861160145, 0.6908621861959551, 0.7038890577210998, 0.36374877913651393, 0.3435310328652262, 0.27837409064273155, 0.0, 0.1098173826075146, 0.19403438199688816, 0.04861480541801367, 0.0, 0.17298451681793914, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.807661945335576, 0.7712979721603697, 0.8791272311945901, 0.8579656062024694, 0.4408472695122181, 0.3101778860701034, 0.0, 0.1642747640420384, 0.9378553872115631, 0.9482326111744583, nan, 0.3534847901424721, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8111 | 33.0 | 660 | 2.3018 | 0.0769 | 0.1575 | 0.5579 | [0.48684512810972097, 0.7188739727624192, 0.6881599572448257, 0.38048206300778636, 0.298146582950978, 0.24303529909110125, 0.0, 0.13649151841125362, 0.21524011803518422, 0.049986482197642026, 0.0, 0.1652892561983471, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8288227545283747, 0.8261377573498767, 0.8698437902624853, 0.757714354998417, 0.37160217460825073, 0.2746643734174191, 0.0, 0.18280046545132156, 0.9071132470009905, 0.9697833523375142, nan, 0.3112565781029393, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2645 | 34.0 | 680 | 2.2879 | 0.0764 | 0.1586 | 0.5647 | [0.523132132565937, 0.7332575089854492, 0.6917468223029607, 0.3768701209447672, 0.3189359143399452, 0.2538414921554104, 0.0, 0.11942452590998628, 0.23668586179507545, 0.04691834451901566, 0.0, 0.1351541120553075, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8254851101710573, 0.8271662637977638, 0.8724867503778144, 0.8643512936405828, 0.4140785191595026, 0.3001767712961636, 0.0, 0.1696496185885004, 0.874536850214608, 0.9565564424173318, nan, 0.24088692080605828, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3385 | 35.0 | 700 | 2.2476 | 0.0758 | 0.1573 | 0.5636 | [0.5046784645767062, 0.730853100421637, 0.6906982792843356, 0.3974850939489274, 0.3106473345049795, 0.2400536151853843, 0.0, 0.1487451411188102, 0.2111970669754061, 0.046564458308630825, 0.0, 0.1293910893957243, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8192153296493674, 0.8335992665007561, 0.8784830314299842, 0.8365856780077733, 0.3990105156229425, 0.2738044049495963, 0.0, 0.19507397351360337, 0.8833412817784951, 0.958266818700114, nan, 0.21499165704017456, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.3315 | 36.0 | 720 | 2.2589 | 0.0760 | 0.1589 | 0.5651 | [0.5223323082506124, 0.7408631116141162, 0.6837550061879653, 0.3814286554522997, 0.31727334903868076, 0.27626367677228175, 0.0, 0.11901900163268142, 0.19294514689905456, 0.04649480322961883, 0.0, 0.13994002024794178, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8012409990378179, 0.8457299865445755, 0.8638092054405282, 0.8736603865092248, 0.39268985496341163, 0.30881626932938383, 0.0, 0.15820727360041373, 0.9186323782970762, 0.9599771949828962, nan, 0.23507893723527146, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 2.2562 | 37.0 | 740 | 2.2604 | 0.0766 | 0.1562 | 0.5559 | [0.49336952101419806, 0.7146194028978202, 0.6916788345482708, 0.3836707024845504, 0.3132657940350248, 0.27580309286465676, 0.0, 0.13351638033194538, 0.1759989723129005, 0.046297154256623806, 0.0, 0.1402667526292461, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8034389014327157, 0.8187551350900799, 0.8676465467410456, 0.8196649534882154, 0.3943828890686431, 0.30688930294778083, 0.0, 0.19041022515284164, 0.8544333981437323, 0.9685290763968073, nan, 0.22339879347965602, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.67 | 38.0 | 760 | 2.2727 | 0.0739 | 0.1558 | 0.5591 | [0.4765426711244061, 0.7216823770278837, 0.6934404914710587, 0.3876700969962654, 0.30409929078014186, 0.2594024527502083, 0.0, 0.1448988355027462, 0.2239744052840704, 0.0478213699439367, 0.0, 0.14076731509378101, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8122737012340406, 0.8268656005525059, 0.8710033498387759, 0.8125046309705841, 0.40329953535619556, 0.28261908174478045, 0.0, 0.2087973993830923, 0.7961407241644961, 0.9599771949828962, nan, 0.2593697856501091, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.5761 | 39.0 | 780 | 2.2565 | 0.0749 | 0.1581 | 0.5552 | [0.49867026315457086, 0.6929315525434497, 0.6960117538985977, 0.3931985791879709, 0.3232063734899765, 0.24912395255196432, 0.0, 0.13885954321360297, 0.25672207215790005, 0.04745549809317958, 0.0, 0.14958776967762108, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8525893737657795, 0.779177730677177, 0.8759271253368991, 0.8313989909536095, 0.4212645083617073, 0.273422196741675, 0.0, 0.2039304778264162, 0.8402729373784805, 0.9563283922462942, nan, 0.2916827108201771, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6946 | 40.0 | 800 | 2.2601 | 0.0750 | 0.1576 | 0.5534 | [0.4944224623891422, 0.6888145453780341, 0.7037414414835037, 0.3911177333985678, 0.31352842930796604, 0.2932870405087212, 0.0, 0.1482581406124942, 0.21531963361966683, 0.04624139613029104, 0.0, 0.15333549531676235, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8260225884859668, 0.779637656136507, 0.892268906392551, 0.8292569565598119, 0.40610244737485657, 0.31142802541684583, 0.0, 0.19501856264199036, 0.8322022084449173, 0.9728620296465222, nan, 0.25792581183416763, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6296 | 41.0 | 820 | 2.2467 | 0.0751 | 0.1579 | 0.5549 | [0.4864490946546677, 0.7077958079871842, 0.6997543297983504, 0.38735074938756614, 0.3056326068497028, 0.2837938054384179, 0.0, 0.13901094040281914, 0.15548142780975296, 0.05017387576219512, 0.0, 0.16343096368023266, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8073908067213583, 0.8075680808754361, 0.8767872190766702, 0.8115952767468021, 0.4062529392953216, 0.3108069370789738, 0.0, 0.17413789918915423, 0.8344400014674053, 0.960775370581528, nan, 0.3281992042099859, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.8938 | 42.0 | 840 | 2.2589 | 0.0760 | 0.1583 | 0.5496 | [0.47144056301698833, 0.6844253220677846, 0.7050681830729311, 0.38940204180845894, 0.3170161841805334, 0.2829081766277303, 0.0, 0.14849037976661433, 0.19469181838122396, 0.05107984749389309, 0.0, 0.1739419420657248, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8139581198816588, 0.777559805194032, 0.8894065701411668, 0.8039297574381807, 0.4185932767734532, 0.30104470243498477, 0.0, 0.193079182135535, 0.8140430683444, 0.9608893956670468, nan, 0.3610897189064305, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 0.8033 | 43.0 | 860 | 2.2384 | 0.0757 | 0.1567 | 0.5482 | [0.47514768777939764, 0.6799096805891669, 0.7066319360377584, 0.37413968719359536, 0.3190275365914165, 0.28839185669174466, 0.0, 0.12979580141159439, 0.21690195696621556, 0.04958852948626566, 0.0, 0.1692121050969704, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8250364117563783, 0.7744668436908348, 0.8841868109674139, 0.7942299790511731, 0.4112567956507835, 0.3156084276909847, 0.0, 0.15896455551245822, 0.8172713599178253, 0.9598631698973774, nan, 0.32672314208702347, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9207 | 44.0 | 880 | 2.2248 | 0.0778 | 0.1569 | 0.5629 | [0.5048158614958531, 0.7323207951853928, 0.6968780658627216, 0.3717431595955756, 0.30453854251959295, 0.2582376063809487, 0.0, 0.1284858912594632, 0.2154746927320236, 0.04812190423775454, 0.0, 0.16130029364793158, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8147979297487049, 0.8380288398566342, 0.8731274679815306, 0.843052196932445, 0.39180571493067967, 0.2694010478875034, 0.0, 0.17241092702388208, 0.8167944532081147, 0.9571265678449259, nan, 0.3014054678475164, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9397 | 45.0 | 900 | 2.2397 | 0.0739 | 0.1556 | 0.5609 | [0.5104979289382761, 0.7335484971743466, 0.6945526654100276, 0.38850570478760144, 0.30587837075482344, 0.28333104638650364, 0.0, 0.1343887001712975, 0.17689109754822732, 0.04549396448174262, 0.0, 0.12715929031000195, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7987455640043094, 0.8382982460318406, 0.8635097396040087, 0.8525498966030568, 0.40798359638066933, 0.31239150860764736, 0.0, 0.17098871465248147, 0.8069995230932903, 0.9630558722919043, nan, 0.20927993839045053, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.9628 | 46.0 | 920 | 2.2622 | 0.0746 | 0.1568 | 0.5597 | [0.5243409717367976, 0.7234668087061871, 0.6961363883642491, 0.3877068557919622, 0.3073133918770582, 0.3141209752305267, 0.0, 0.12196637124161351, 0.16876002030244194, 0.04680424142652056, 0.0, 0.14262956861751236, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7937450961102407, 0.8248353794310618, 0.86889664250047, 0.86718713162734, 0.4213209428318817, 0.34747503702642013, 0.0, 0.16318501690031584, 0.7684434498697678, 0.9628278221208666, nan, 0.2528237710178411, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.3656 | 47.0 | 940 | 2.2377 | 0.0741 | 0.1562 | 0.5470 | [0.4763894452883486, 0.6767419874917132, 0.7003469975886608, 0.3897728739192154, 0.3134054542013075, 0.2600953343490263, 0.0, 0.13592144099973935, 0.24371247768943696, 0.04822769497637688, 0.0, 0.1622893246626674, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8360331221011563, 0.7703438873078434, 0.8707770093809415, 0.8137979347555184, 0.4049173235011945, 0.2745927093784339, 0.0, 0.17819212796217285, 0.8265160130599069, 0.9637400228050171, nan, 0.3099088692080606, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 0.901 | 48.0 | 960 | 2.2366 | 0.0771 | 0.1589 | 0.5660 | [0.5126108756723662, 0.7402906082099798, 0.6984062665053278, 0.3948832465096225, 0.29967275107264923, 0.30957178465350227, 0.0, 0.14647758400448116, 0.1584974262887711, 0.048483525823995954, 0.0, 0.16160454458326798, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.7955350908554303, 0.8506849763637013, 0.8702512030865874, 0.8266568770755168, 0.3875919411576591, 0.33030751835395666, 0.0, 0.1980292199996306, 0.834770167651051, 0.9625997719498289, nan, 0.2975869593120267, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.4665 | 49.0 | 980 | 2.2347 | 0.0757 | 0.1564 | 0.5550 | [0.502378498083232, 0.7047110323622909, 0.6973560251743418, 0.39012057813622936, 0.30475148618887915, 0.28088014418744367, 0.0, 0.13636174463126352, 0.19038196980247626, 0.04744125986020268, 0.0, 0.15255730337078652, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8197576068778029, 0.8051047260689917, 0.871222725974831, 0.8227096061485818, 0.39827686751067554, 0.2978198206806491, 0.0, 0.18155372084002883, 0.822443963461609, 0.9635119726339795, nan, 0.27230137337954047, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
| 1.6342 | 50.0 | 1000 | 2.2284 | 0.0767 | 0.1574 | 0.5622 | [0.5148203561012767, 0.724040099091574, 0.6958825927435793, 0.38401244431532056, 0.29543194795602395, 0.29389807778274474, 0.0, 0.12126925156299818, 0.20467349613092675, 0.04878431281437682, 0.0, 0.1679011093073593, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | [0.8140876905468601, 0.8295938962384349, 0.867831101268203, 0.8547256107829203, 0.39126018171899396, 0.31410348287229467, 0.0, 0.16157810162353853, 0.7849884441835724, 0.9576966932725199, nan, 0.3186048004107303, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Hadihandrian22/vonny | Hadihandrian22 | 2023-07-27T14:33:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-27T14:33:26Z | ---
license: creativeml-openrail-m
---
|
IIC/mdeberta-v3-base-meddocan | IIC | 2023-07-27T14:28:46Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:bigbio/meddocan",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-06-21T15:41:36Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "bigbio/meddocan"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-meddocan
results:
- task:
type: token-classification
dataset:
name: meddocan
type: bigbio/meddocan
split: test
metrics:
- name: f1
type: f1
value: 0.974
pipeline_tag: token-classification
---
# mdeberta-v3-base-meddocan
This model is a fine-tuned version of mdeberta-v3-base for the meddocan dataset, used in a benchmark in the paper TODO. The model has an F1 score of 0.974.
Please refer to the original publication for more information TODO LINK
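## Usage example
A minimal sketch with the Transformers `pipeline` API (the example sentence and the `aggregation_strategy` setting are illustrative assumptions, not from the original card):
```python
from transformers import pipeline

# Token-classification (de-identification) pipeline for Spanish clinical text.
ner = pipeline(
    "token-classification",
    model="IIC/mdeberta-v3-base-meddocan",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

text = "El paciente Juan Pérez ingresó en el Hospital La Paz el 3 de marzo de 2020."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```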
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
flavioloss/gpt2-joker | flavioloss | 2023-07-27T14:16:21Z | 157 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"jokes",
"en",
"dataset:Fraser/short-jokes",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-27T00:05:22Z | ---
license: afl-3.0
datasets:
- Fraser/short-jokes
language:
- en
library_name: transformers
tags:
- jokes
pipeline_tag: text-generation
---
Model trained to tell jokes
Example Prompt:
You are a comedian at a comedy club. The audience is going to ask you to tell jokes about a specific topic. Tell the joke in one output, as clearly as possible.
Audience: Tell me a joke about dogs
Comedian:
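A minimal generation sketch with the Transformers `pipeline` API (the sampling parameters are illustrative assumptions, not from the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="flavioloss/gpt2-joker")

prompt = (
    "You are a comedian at a comedy club. The audience is going to ask you to "
    "tell jokes about a specific topic. Tell the joke in one output, as clearly "
    "as possible.\nAudience: Tell me a joke about dogs\nComedian:"
)

# do_sample/top_p are illustrative defaults; tune them to taste.
output = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
``` |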
deinon-daemon/superllama-7b-dollybricks-cqa-lora | deinon-daemon | 2023-07-27T14:06:03Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T14:05:46Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
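For reference, a minimal loading sketch that mirrors the quantization config above (the base model id is an assumption — the card does not state which 7B checkpoint the adapter was trained on):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit NF4 quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumption: a Llama-family 7B base
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "deinon-daemon/superllama-7b-dollybricks-cqa-lora")
```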
|
reach-vb/musicgen-large-endpoint | reach-vb | 2023-07-27T14:04:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-07-27T11:46:07Z | ---
inference: false
tags:
- musicgen
license: cc-by-nc-4.0
duplicated_from: facebook/musicgen-large
---
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
```
pip install git+https://github.com/huggingface/transformers.git
```
2. Run the following Python code to generate text-conditional audio samples:
```py
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```py
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```py
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details**:
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Experimental Setup section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** All vocals have been removed from the data source using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). The model is therefore not able to produce vocals.
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
SigSegev/t5-large_PREFIX_TUNING_SEQ2SEQ_v2 | SigSegev | 2023-07-27T13:41:44Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T13:41:29Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
xinyangli/woman_portrait | xinyangli | 2023-07-27T13:32:27Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-27T13:07:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a portrait of a sks woman
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - xinyangli/woman_portrait
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a portrait of a sks woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
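A minimal loading sketch with 🧨 Diffusers (assumes a recent diffusers version that provides `load_lora_weights`; the generation settings and output filename are illustrative):
```python
from diffusers import StableDiffusionPipeline
import torch

# Load the base model, then attach the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("xinyangli/woman_portrait")

image = pipe("a portrait of a sks woman", num_inference_steps=30).images[0]
image.save("portrait.png")
```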
|
morenolq/bart-it-ilpost | morenolq | 2023-07-27T13:27:40Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"it",
"dataset:ARTeLab/ilpost",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-27T16:15:11Z | ---
language: "it"
license: mit
datasets:
- ARTeLab/ilpost
tags:
- bart
- pytorch
pipeline:
- summarization
---
# BART-IT - Il Post
BART-IT is a sequence-to-sequence model, based on the BART architecture that is specifically tailored to the Italian language. The model is pre-trained on a [large corpus of Italian text](https://huggingface.co/datasets/gsarti/clean_mc4_it), and can be fine-tuned on a variety of tasks.
## Model description
The model is a `base`-sized BART model, with a vocabulary size of 52,000 tokens. It has 140M parameters and can be used for any task that requires a sequence-to-sequence model. It is trained from scratch on a large corpus of Italian text, and can be fine-tuned on a variety of tasks.
## Pre-training
The code used to pre-train BART-IT together with additional information on model parameters can be found [here](https://github.com/MorenoLaQuatra/bart-it).
## Fine-tuning
The model has been fine-tuned for the abstractive summarization task on 3 different Italian datasets:
- [FanPage](https://huggingface.co/datasets/ARTeLab/fanpage) - finetuned model [here](https://huggingface.co/morenolq/bart-it-fanpage)
- **This model** [IlPost](https://huggingface.co/datasets/ARTeLab/ilpost) - finetuned model [here](https://huggingface.co/morenolq/bart-it-ilpost)
- [WITS](https://huggingface.co/datasets/Silvia/WITS) - finetuned model [here](https://huggingface.co/morenolq/bart-it-WITS)
## Usage
In order to use the model, you can use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("morenolq/bart-it-ilpost")
model = AutoModelForSeq2SeqLM.from_pretrained("morenolq/bart-it-ilpost")
input_ids = tokenizer.encode("Il modello BART-IT è stato pre-addestrato su un corpus di testo italiano", return_tensors="pt")
outputs = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Citation
If you find this model useful for your research, please cite the following paper:
```bibtex
@Article{BARTIT,
AUTHOR = {La Quatra, Moreno and Cagliero, Luca},
TITLE = {BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization},
JOURNAL = {Future Internet},
VOLUME = {15},
YEAR = {2023},
NUMBER = {1},
ARTICLE-NUMBER = {15},
URL = {https://www.mdpi.com/1999-5903/15/1/15},
ISSN = {1999-5903},
DOI = {10.3390/fi15010015}
}
```
|
undrwolf/Pyramid | undrwolf | 2023-07-27T13:14:28Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-07-27T13:10:10Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: undrwolf/Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
xinyangli/woman_photo | xinyangli | 2023-07-27T13:07:00Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-27T12:41:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - xinyangli/woman_photo
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of a sks person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
aronmal/a2c-AntBulletEnv-v0 | aronmal | 2023-07-27T13:03:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T13:02:47Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1527.35 +/- 59.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it with SB3.
checkpoint = load_from_hub(repo_id="aronmal/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaalan/sbert_large_nlu_ru | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaalan | 2023-07-27T13:03:18Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"jax",
"bert",
"PyTorch",
"Transformers",
"ru",
"region:us"
]
| null | 2023-07-27T09:07:35Z | ---
library_name: sentence-transformers
language:
- ru
tags:
- PyTorch
- Transformers
---
# BERT large model (uncased) for Sentence Embeddings in the Russian language.
The model is described [in this article](https://habr.com/ru/company/sberdevices/blog/527576/)
For better quality, use mean token embeddings.
## Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['Привет! Как твои дела?',
'А правда, что 42 твое любимое число?']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
model = AutoModel.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=24, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
# Authors
- [SberDevices](https://sberdevices.ru/) Team.
- Denis Antykhov: [Github](https://github.com/gaphex);
- Aleksandr Abramov: [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko)
|
liuyt75/t5-base_prefix_tuning_sentences_66agree_15 | liuyt75 | 2023-07-27T12:49:30Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-26T12:18:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
greg-szopinski/Reinforce-10_000s | greg-szopinski | 2023-07-27T12:45:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T12:45:09Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-10_000s
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 464.40 +/- 106.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
apple/coreml-stable-diffusion-xl-base | apple | 2023-07-27T12:41:14Z | 22 | 67 | null | [
"coreml",
"text-to-image",
"stable-diffusion",
"core-ml",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-07-26T14:44:27Z | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
- core-ml
---
# SD-XL 1.0-base Model Card (Core ML)
This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion) which has [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md). This version contains Core ML weights with the `ORIGINAL` attention implementation, suitable for running on macOS GPUs.
The Core ML weights are also distributed as a zip archive for use in the [Hugging Face demo app](https://github.com/huggingface/swift-coreml-diffusers) and other third party tools. The zip archive was created from the contents of the `original/compiled` folder in this repo. Please, refer to https://huggingface.co/blog/diffusers-coreml for details.
The remaining contents of this model card were copied from the [original repo](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.18.0:
```
pip install diffusers --upgrade
```
In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
You can use the model then as follows
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
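For the two-stage base-plus-refiner flow described above, a minimal sketch looks as follows (the refiner reuses the base model's second text encoder and VAE; the settings are illustrative, not official defaults):
```py
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "An astronaut riding a green horse"
# The base model emits latents, which the refiner denoises into the final image.
latents = base(prompt=prompt, output_type="latent").images
image = refiner(prompt=prompt, image=latents).images[0]
```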
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. |
xinyangli/person | xinyangli | 2023-07-27T12:29:49Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-27T12:04:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - xinyangli/person
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a sks person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
asenella/ms_MMVAEPlus_beta_10_scale_False_seed_3 | asenella | 2023-07-27T12:15:36Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T12:15:34Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
jordyvl/rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-27T12:14:48Z | 167 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-27T06:53:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rvlcdip-tiny_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6215
- Accuracy: 0.7963
- Brier Loss: 0.3076
- Nll: 1.6291
- F1 Micro: 0.7963
- F1 Macro: 0.7978
- Ece: 0.0919
- Aurc: 0.0682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 125 | 1.3808 | 0.541 | 0.5996 | 3.3159 | 0.541 | 0.5235 | 0.1039 | 0.2209 |
| No log | 2.0 | 250 | 1.0577 | 0.6525 | 0.4662 | 2.6310 | 0.6525 | 0.6396 | 0.0871 | 0.1302 |
| No log | 3.0 | 375 | 0.9165 | 0.7075 | 0.4104 | 2.2685 | 0.7075 | 0.7041 | 0.0788 | 0.1048 |
| 1.3004 | 4.0 | 500 | 0.8505 | 0.7298 | 0.3804 | 2.1171 | 0.7298 | 0.7380 | 0.0622 | 0.0934 |
| 1.3004 | 5.0 | 625 | 0.8063 | 0.745 | 0.3603 | 2.1178 | 0.745 | 0.7359 | 0.0588 | 0.0814 |
| 1.3004 | 6.0 | 750 | 0.7441 | 0.7662 | 0.3348 | 1.9219 | 0.7663 | 0.7636 | 0.0545 | 0.0741 |
| 1.3004 | 7.0 | 875 | 0.6987 | 0.7732 | 0.3193 | 1.8601 | 0.7732 | 0.7741 | 0.0509 | 0.0697 |
| 0.4682 | 8.0 | 1000 | 0.7033 | 0.773 | 0.3240 | 1.8889 | 0.7730 | 0.7733 | 0.0516 | 0.0776 |
| 0.4682 | 9.0 | 1125 | 0.6973 | 0.7865 | 0.3151 | 1.9589 | 0.7865 | 0.7838 | 0.0441 | 0.0760 |
| 0.4682 | 10.0 | 1250 | 0.7068 | 0.7748 | 0.3252 | 2.0362 | 0.7748 | 0.7749 | 0.0515 | 0.0791 |
| 0.4682 | 11.0 | 1375 | 0.6988 | 0.7768 | 0.3285 | 1.9227 | 0.7768 | 0.7801 | 0.0555 | 0.0840 |
| 0.1899 | 12.0 | 1500 | 0.7048 | 0.7762 | 0.3303 | 1.9777 | 0.7762 | 0.7719 | 0.0627 | 0.0809 |
| 0.1899 | 13.0 | 1625 | 0.6842 | 0.7785 | 0.3240 | 1.9360 | 0.7785 | 0.7784 | 0.0614 | 0.0808 |
| 0.1899 | 14.0 | 1750 | 0.6993 | 0.7742 | 0.3319 | 1.9508 | 0.7742 | 0.7727 | 0.0731 | 0.0759 |
| 0.1899 | 15.0 | 1875 | 0.6936 | 0.7742 | 0.3333 | 1.9042 | 0.7742 | 0.7760 | 0.0717 | 0.0853 |
| 0.1304 | 16.0 | 2000 | 0.6818 | 0.7837 | 0.3233 | 1.9541 | 0.7837 | 0.7855 | 0.0713 | 0.0853 |
| 0.1304 | 17.0 | 2125 | 0.6757 | 0.78 | 0.3255 | 1.8818 | 0.78 | 0.7829 | 0.0755 | 0.0834 |
| 0.1304 | 18.0 | 2250 | 0.7018 | 0.781 | 0.3348 | 2.0078 | 0.7810 | 0.7829 | 0.0786 | 0.0876 |
| 0.1304 | 19.0 | 2375 | 0.6872 | 0.7775 | 0.3340 | 1.8345 | 0.7775 | 0.7786 | 0.0864 | 0.0787 |
| 0.11 | 20.0 | 2500 | 0.7054 | 0.7758 | 0.3379 | 1.9542 | 0.7758 | 0.7747 | 0.0731 | 0.0847 |
| 0.11 | 21.0 | 2625 | 0.7006 | 0.782 | 0.3371 | 1.8610 | 0.782 | 0.7813 | 0.0821 | 0.0891 |
| 0.11 | 22.0 | 2750 | 0.7046 | 0.775 | 0.3428 | 1.8464 | 0.775 | 0.7772 | 0.0833 | 0.0814 |
| 0.11 | 23.0 | 2875 | 0.6620 | 0.789 | 0.3201 | 1.8174 | 0.7890 | 0.7908 | 0.0761 | 0.0799 |
| 0.0979 | 24.0 | 3000 | 0.6886 | 0.783 | 0.3324 | 1.8706 | 0.7830 | 0.7848 | 0.0807 | 0.0773 |
| 0.0979 | 25.0 | 3125 | 0.6600 | 0.7847 | 0.3236 | 1.8218 | 0.7847 | 0.7863 | 0.0833 | 0.0749 |
| 0.0979 | 26.0 | 3250 | 0.6777 | 0.7798 | 0.3349 | 1.7189 | 0.7798 | 0.7812 | 0.0951 | 0.0752 |
| 0.0979 | 27.0 | 3375 | 0.6554 | 0.7857 | 0.3212 | 1.7356 | 0.7857 | 0.7888 | 0.0871 | 0.0709 |
| 0.087 | 28.0 | 3500 | 0.6460 | 0.7955 | 0.3140 | 1.7680 | 0.7955 | 0.7970 | 0.0761 | 0.0696 |
| 0.087 | 29.0 | 3625 | 0.6371 | 0.7935 | 0.3136 | 1.6350 | 0.7935 | 0.7946 | 0.0830 | 0.0706 |
| 0.087 | 30.0 | 3750 | 0.6334 | 0.7915 | 0.3127 | 1.7187 | 0.7915 | 0.7933 | 0.0857 | 0.0712 |
| 0.087 | 31.0 | 3875 | 0.6293 | 0.7977 | 0.3075 | 1.7781 | 0.7977 | 0.7999 | 0.0799 | 0.0661 |
| 0.0793 | 32.0 | 4000 | 0.6273 | 0.7973 | 0.3076 | 1.6439 | 0.7973 | 0.7976 | 0.0782 | 0.0695 |
| 0.0793 | 33.0 | 4125 | 0.6320 | 0.7933 | 0.3123 | 1.6486 | 0.7932 | 0.7954 | 0.0899 | 0.0679 |
| 0.0793 | 34.0 | 4250 | 0.6345 | 0.79 | 0.3154 | 1.6402 | 0.79 | 0.7903 | 0.0922 | 0.0675 |
| 0.0793 | 35.0 | 4375 | 0.6209 | 0.793 | 0.3098 | 1.6026 | 0.793 | 0.7943 | 0.0863 | 0.0630 |
| 0.0733 | 36.0 | 4500 | 0.6187 | 0.7947 | 0.3076 | 1.6282 | 0.7947 | 0.7967 | 0.0880 | 0.0666 |
| 0.0733 | 37.0 | 4625 | 0.6146 | 0.7957 | 0.3051 | 1.6186 | 0.7957 | 0.7971 | 0.0885 | 0.0623 |
| 0.0733 | 38.0 | 4750 | 0.6169 | 0.7983 | 0.3062 | 1.6182 | 0.7983 | 0.7996 | 0.0835 | 0.0650 |
| 0.0733 | 39.0 | 4875 | 0.6180 | 0.7953 | 0.3074 | 1.6241 | 0.7953 | 0.7975 | 0.0889 | 0.0655 |
| 0.0693 | 40.0 | 5000 | 0.6204 | 0.7977 | 0.3069 | 1.6048 | 0.7977 | 0.7987 | 0.0824 | 0.0659 |
| 0.0693 | 41.0 | 5125 | 0.6140 | 0.7967 | 0.3055 | 1.6065 | 0.7967 | 0.7986 | 0.0911 | 0.0662 |
| 0.0693 | 42.0 | 5250 | 0.6162 | 0.7957 | 0.3062 | 1.6182 | 0.7957 | 0.7971 | 0.0883 | 0.0655 |
| 0.0693 | 43.0 | 5375 | 0.6169 | 0.796 | 0.3058 | 1.6212 | 0.796 | 0.7976 | 0.0879 | 0.0662 |
| 0.0673 | 44.0 | 5500 | 0.6173 | 0.7973 | 0.3063 | 1.6161 | 0.7973 | 0.7990 | 0.0877 | 0.0666 |
| 0.0673 | 45.0 | 5625 | 0.6193 | 0.797 | 0.3070 | 1.6151 | 0.797 | 0.7986 | 0.0881 | 0.0678 |
| 0.0673 | 46.0 | 5750 | 0.6209 | 0.7963 | 0.3076 | 1.6211 | 0.7963 | 0.7979 | 0.0894 | 0.0678 |
| 0.0673 | 47.0 | 5875 | 0.6211 | 0.7977 | 0.3075 | 1.6284 | 0.7977 | 0.7993 | 0.0905 | 0.0691 |
| 0.0662 | 48.0 | 6000 | 0.6206 | 0.7967 | 0.3072 | 1.6289 | 0.7967 | 0.7983 | 0.0892 | 0.0673 |
| 0.0662 | 49.0 | 6125 | 0.6213 | 0.7965 | 0.3075 | 1.6262 | 0.7965 | 0.7980 | 0.0886 | 0.0684 |
| 0.0662 | 50.0 | 6250 | 0.6215 | 0.7963 | 0.3076 | 1.6291 | 0.7963 | 0.7978 | 0.0919 | 0.0682 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nikbhi/spaceinvador_dqn_v1 | nikbhi | 2023-07-27T12:08:39Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T12:08:00Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 699.00 +/- 289.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikbhi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nikbhi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nikbhi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
asenella/ms_MMVAEPlus_beta_5_scale_False_seed_1 | asenella | 2023-07-27T12:05:36Z | 0 | 1 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T12:05:35Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_25_scale_False_seed_2 | asenella | 2023-07-27T12:05:32Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T12:05:30Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_10_scale_True_seed_2 | asenella | 2023-07-27T12:01:53Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T12:01:50Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
dhinman/Reinforce-Pixelcopter-200000 | dhinman | 2023-07-27T11:58:35Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T11:58:23Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-200000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 182.70 +/- 200.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
snob/TagMyBookmark-KoAlpaca-QLoRA-v1.0_ALLDATA | snob | 2023-07-27T11:58:28Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T11:58:20Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
asenella/ms_MMVAEPlus_beta_5_scale_True_seed_3 | asenella | 2023-07-27T11:58:19Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T11:58:17Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_25_scale_True_seed_3 | asenella | 2023-07-27T11:57:42Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T11:57:40Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MMVAEPlus_beta_10_scale_False_seed_2 | asenella | 2023-07-27T11:52:31Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T11:52:29Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Chat-Error/Kimiko_7B | Chat-Error | 2023-07-27T11:50:53Z | 0 | 15 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-07-26T14:59:07Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Kimiko_7B
<!-- Provide a quick summary of what the model is/does. -->
This is my new Kimiko model, trained with LLaMA2 for...purpose
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** nRuaif
- **Model type:** Decoder only
- **License:** CC BY-NC-SA
- **Finetuned from model [optional]:** LLaMA2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenAccess-AI-Collective/axolotl
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is trained on 3k examples of instruction data and high-quality roleplay; for best results, follow this format:
```
<<HUMAN>>
How to do abc
<<AIBOT>>
Here is how
Or with system prompting for roleplay
<<SYSTEM>>
A's Persona:
B's Persona:
Scenario:
Add some instruction here on how you want your RP to go.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
All biases of this model come from LLaMA2, with the exception of NSFW bias...
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
3000 examples from LIMAERP and LIMA, plus 1000 good instructions sampled from Airboro
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained with 1 L4 from GCP, costing a whopping 1.5 USD.
#### Training Hyperparameters
- **Training regime:** 3 epochs with a 0.0002 learning rate, full 4096-token context, LoRA
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
It took 8 hours to train this model with xformers enabled.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** L4 with 12CPUs 48gb ram
- **Hours used:** 8
- **Cloud Provider:** GCP
- **Compute Region:** US
- **Carbon Emitted:** 0.2KG
|
MheniDevs/Kinyarwanda | MheniDevs | 2023-07-27T11:43:04Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-24T02:16:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kinyarwanda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kinyarwanda
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3917
- Wer: 0.3246
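A minimal inference sketch with the 🤗 Transformers pipeline (the repo id is taken from this model page; the audio filename is a placeholder and should point to a 16 kHz recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="MheniDevs/Kinyarwanda")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```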
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 9.0634 | 0.12 | 400 | 3.0554 | 1.0 |
| 2.8009 | 0.24 | 800 | 1.5927 | 0.9554 |
| 0.9022 | 0.36 | 1200 | 0.7328 | 0.6445 |
| 0.6213 | 0.48 | 1600 | 0.6138 | 0.5510 |
| 0.5299 | 0.6 | 2000 | 0.6072 | 0.5223 |
| 0.4999 | 0.72 | 2400 | 0.5449 | 0.4969 |
| 0.4731 | 0.84 | 2800 | 0.5261 | 0.4828 |
| 0.458 | 0.96 | 3200 | 0.5058 | 0.4607 |
| 0.4158 | 1.09 | 3600 | 0.4892 | 0.4463 |
| 0.4037 | 1.21 | 4000 | 0.4759 | 0.4429 |
| 0.4021 | 1.33 | 4400 | 0.4615 | 0.4330 |
| 0.3934 | 1.45 | 4800 | 0.4593 | 0.4315 |
| 0.3808 | 1.57 | 5200 | 0.4736 | 0.4344 |
| 0.3838 | 1.69 | 5600 | 0.4569 | 0.4249 |
| 0.3726 | 1.81 | 6000 | 0.4473 | 0.4140 |
| 0.3623 | 1.93 | 6400 | 0.4403 | 0.4097 |
| 0.3517 | 2.05 | 6800 | 0.4389 | 0.4061 |
| 0.333 | 2.17 | 7200 | 0.4383 | 0.4104 |
| 0.3354 | 2.29 | 7600 | 0.4360 | 0.3955 |
| 0.3257 | 2.41 | 8000 | 0.4226 | 0.3942 |
| 0.3275 | 2.53 | 8400 | 0.4206 | 0.4040 |
| 0.3262 | 2.65 | 8800 | 0.4172 | 0.3875 |
| 0.3206 | 2.77 | 9200 | 0.4209 | 0.3877 |
| 0.323 | 2.89 | 9600 | 0.4177 | 0.3825 |
| 0.3099 | 3.01 | 10000 | 0.4101 | 0.3691 |
| 0.3008 | 3.14 | 10400 | 0.4055 | 0.3709 |
| 0.2918 | 3.26 | 10800 | 0.4085 | 0.3800 |
| 0.292 | 3.38 | 11200 | 0.4089 | 0.3713 |
| 0.292 | 3.5 | 11600 | 0.4092 | 0.3730 |
| 0.2785 | 3.62 | 12000 | 0.4151 | 0.3687 |
| 0.2941 | 3.74 | 12400 | 0.4004 | 0.3639 |
| 0.2838 | 3.86 | 12800 | 0.4108 | 0.3703 |
| 0.2854 | 3.98 | 13200 | 0.3911 | 0.3596 |
| 0.2683 | 4.1 | 13600 | 0.3944 | 0.3575 |
| 0.2647 | 4.22 | 14000 | 0.3836 | 0.3538 |
| 0.2704 | 4.34 | 14400 | 0.4006 | 0.3540 |
| 0.2664 | 4.46 | 14800 | 0.3974 | 0.3553 |
| 0.2662 | 4.58 | 15200 | 0.3890 | 0.3470 |
| 0.2615 | 4.7 | 15600 | 0.3856 | 0.3507 |
| 0.2553 | 4.82 | 16000 | 0.3814 | 0.3497 |
| 0.2587 | 4.94 | 16400 | 0.3837 | 0.3440 |
| 0.2522 | 5.06 | 16800 | 0.3834 | 0.3486 |
| 0.2451 | 5.19 | 17200 | 0.3897 | 0.3414 |
| 0.2423 | 5.31 | 17600 | 0.3864 | 0.3481 |
| 0.2434 | 5.43 | 18000 | 0.3808 | 0.3416 |
| 0.2525 | 5.55 | 18400 | 0.3795 | 0.3408 |
| 0.2427 | 5.67 | 18800 | 0.3841 | 0.3411 |
| 0.2411 | 5.79 | 19200 | 0.3804 | 0.3366 |
| 0.2404 | 5.91 | 19600 | 0.3800 | 0.3328 |
| 0.2372 | 6.03 | 20000 | 0.3749 | 0.3335 |
| 0.2244 | 6.15 | 20400 | 0.3820 | 0.3327 |
| 0.2381 | 6.27 | 20800 | 0.3789 | 0.3325 |
| 0.2294 | 6.39 | 21200 | 0.3867 | 0.3298 |
| 0.2378 | 6.51 | 21600 | 0.3843 | 0.3281 |
| 0.2312 | 6.63 | 22000 | 0.3813 | 0.3277 |
| 0.2411 | 6.75 | 22400 | 0.3780 | 0.3268 |
| 0.2315 | 6.87 | 22800 | 0.3790 | 0.3280 |
| 0.241 | 6.99 | 23200 | 0.3776 | 0.3281 |
| 0.2313 | 7.11 | 23600 | 0.3929 | 0.3283 |
| 0.2423 | 7.24 | 24000 | 0.3905 | 0.3280 |
| 0.2337 | 7.36 | 24400 | 0.3979 | 0.3249 |
| 0.2368 | 7.48 | 24800 | 0.3980 | 0.3257 |
| 0.2409 | 7.6 | 25200 | 0.3937 | 0.3229 |
| 0.2416 | 7.72 | 25600 | 0.3867 | 0.3237 |
| 0.2364 | 7.84 | 26000 | 0.3912 | 0.3253 |
| 0.234 | 7.96 | 26400 | 0.3917 | 0.3246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
timxiaohangt/dt-ppo_eval_halfcheetah-2607_2255 | timxiaohangt | 2023-07-27T11:41:39Z | 33 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-26T21:57:30Z | ---
base_model: ''
tags:
- generated_from_trainer
model-index:
- name: dt-ppo_eval_halfcheetah-2607_2255
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-ppo_eval_halfcheetah-2607_2255
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1024
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
The-matt/4_law-qlora-polyglot-12.8b | The-matt | 2023-07-27T11:41:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T11:41:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
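A sketch of the equivalent `BitsAndBytesConfig` for the flags listed above (illustrative; not the exact training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```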
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/ms_MMVAEPlus_beta_5_scale_True_seed_1 | asenella | 2023-07-27T11:41:13Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T11:41:11Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
rehanhaider/DBSD-1.5-9-vectors | rehanhaider | 2023-07-27T11:30:54Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-27T11:11:06Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: in the style of wlat_mntn
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rehanhaider/DBSD-1.5-9-vectors
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on in the style of wlat_mntn using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
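A minimal inference sketch with 🧨 Diffusers (assumes the fine-tuned pipeline is stored in this repo; everything in the prompt beyond the instance phrase is illustrative):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "rehanhaider/DBSD-1.5-9-vectors", torch_dtype=torch.float16
).to("cuda")

image = pipe("a mountain lake in the style of wlat_mntn").images[0]
image.save("wlat_mntn.png")
```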
|
mpterradillos/beans_vit_model | mpterradillos | 2023-07-27T11:16:53Z | 224 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-27T11:09:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: beans_vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beans_vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Accuracy: 1.0
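A minimal inference sketch with the 🤗 Transformers pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="mpterradillos/beans_vit_model")
print(classifier("leaf.jpg"))  # "leaf.jpg" is a placeholder path to a bean-leaf photo
```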
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1356 | 3.85 | 500 | 0.0068 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
xinyangli/person-finetuned | xinyangli | 2023-07-27T11:11:41Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-27T08:50:42Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a sks woman
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - xinyangli/person-finetuned
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a sks woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
The-matt/3_law-qlora-polyglot-12.8b | The-matt | 2023-07-27T10:32:37Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T10:32:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
GrantC/micro-orca | GrantC | 2023-07-27T10:20:16Z | 5 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-07-26T21:13:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
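A sketch of loading the adapter on top of an 8-bit base model, matching the config above (the base model id is a placeholder, since the card does not state which model the adapter targets):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "your-base-model-id",  # placeholder: the card does not name the base model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "GrantC/micro-orca")
```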
### Framework versions
- PEFT 0.4.0
|
kratos0619/kratos01 | kratos0619 | 2023-07-27T10:20:10Z | 0 | 0 | null | [
"license:llama2",
"region:us"
]
| null | 2023-07-27T10:18:54Z | ---
license: llama2
---
# ⚠️ Type of model/library unknown.
# Feel free to open a Pull request
# for integration of the huggingface model hub
# into the corresponding library =) |
MYTH-Lab/BatGPT-15B-sirius | MYTH-Lab | 2023-07-27T10:10:52Z | 17 | 5 | transformers | [
"transformers",
"pytorch",
"batgpt",
"feature-extraction",
"BatGPT",
"MLP",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:1911.02150",
"arxiv:2104.09864",
"arxiv:2307.00360",
"region:us"
]
| text-generation | 2023-07-24T04:10:26Z | ---
language:
- zh
- en
tags:
- BatGPT
- MLP
pipeline_tag: text-generation
inference: false
---
# BatGPT-15B-sirius
Bidirectional Autoregressive Talker from Generative Pre-trained Transformer
## 介绍 (Introduction)
BatGPT-15B-sirius 是上海交通大学与武汉大学<font size=1>(或武汉大学与上海交通大学,排名不分先后)</font>联合自然语言处理团队设计、预训练、对齐的系列大型语言模型 [BatGPT](https://github.com/zcli-charlie/BatGPT) 中的一个开源可商用版本。
BatGPT系列模型中还包括BatGPT-30B-orion,BatGPT-70B-alhena,以及BatGPT-140B-menkalinan。
BatGPT-15B-sirius 包含 150 亿参数,在中英文 1T 语料上进行了预训练,在权威的中文和英文 benchmark 上均取得同不错的效果。BatGPT-15B-sirius 有如下几个特点:
1. **支持长达32K的上下文**:BatGPT-15B-sirius 采用旋转位置编码RoPE,在预训练阶段采用 2048 序列长度,并且在指令微调阶段逐步扩展到了 32K 上下文。
2. **高效的预训练目标与模型架构**:BatGPT-15B-sirius 采用双向自回归预训练目标,以提高对于训练数据的运用程度,并且基于 [Multi-Query Attention](http://arxiv.org/abs/1911.02150) 技术,在保证参数规模的前提下尽可能的减少推理显存的占用,提高推理速度。
3. **商业友好的开放协议**:BatGPT-15B-sirius 的源码以及权重不仅支持自由的学术研究使用,也允许免费开源商用,助推大模型进一步帮助人类的日常生活。
BatGPT-15B-sirius is an open-source commercially available version of the series of large-scale language models [BatGPT](https://github.com/zcli-charlie/BatGPT), designed, pretrained, and aligned by the joint natural language processing teams of Shanghai Jiao Tong University and Wuhan University <font size=1>(or Wuhan University and Shanghai Jiao Tong University, in no particular order)</font>.
The BatGPT series of models also include BatGPT-30B-orion, BatGPT-70B-alhena, and BatGPT-140B-menkalinan.
BatGPT-15B-sirius contains 15 billion parameters and has been pretrained on 1T Chinese and English corpora. It achieves excellent performance on authoritative Chinese and English benchmarks. BatGPT-15B-sirius has the following characteristics:
1. **Supports Contexts Up to 32K Tokens**: BatGPT-15B-sirius uses rotated positional encoding (RoPE) and is pretrained with a sequence length of 2048 tokens. During fine-tuning, it gradually expands to support contexts up to 32K tokens.
2. **Efficient Pre-training Objectives and Model Architecture**: BatGPT-15B-sirius employs a bidirectional autoregressive pretraining objective to better utilize the training data. It also utilizes the [Multi-Query Attention](http://arxiv.org/abs/1911.02150) technique to reduce inference memory consumption and improve inference speed while maintaining model size.
3. **Business-friendly Open License**: The source code and weights of BatGPT-15B-sirius are not only available for academic research but also allow free and open-source commercial use, further facilitating the integration of large language models into human daily life.
## 软件依赖 (Dependencies)
```shell
pip install protobuf transformers cpm_kernels torch>=2.0 streamlit sentencepiece accelerate deepspeed
```
## 简易使用 (Quick Start)
如下是一个使用 BatGPT-15B-sirius 进行对话的示例:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MLP-lab/BatGPT-15B-sirius", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("MLP-lab/BatGPT-15B-sirius", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
history = []
system_prompt = None # 你也可以指定系统提示
response, history = model.chat(tokenizer, "你好", history=history, system_prompt=system_prompt)
print(response)
response, history = model.chat(tokenizer, "介绍一下你自己", history=history, system_prompt=system_prompt)
print(response)
```
Here is an example of a conversation using BatGPT-15B-sirius:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MLP-lab/BatGPT-15B-sirius", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("MLP-lab/BatGPT-15B-sirius", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
history = []
system_prompt = None # You can give a system prompt here.
response, history = model.chat(tokenizer, "Hello", history=history, system_prompt=system_prompt)
print(response)
response, history = model.chat(tokenizer, "Please introduce yourself", history=history, system_prompt=system_prompt)
print(response)
```
## 模型详情 (Model Details)
BatGPT-15B-sirius 具体参数和见下表:
| 模型名称 | 隐含层维度 | 层数 | Query头数 | Key/Value头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 |
|-------------------------|-------|------------|------------|------------|-----------------|--------|--------|----------------|---------|
| BatGPT-15B-sirius | 5,632 | 48 | 44 | 2 | 65,536 | 15,030,081,024 | 1T | [RoPE](https://arxiv.org/abs/2104.09864) | 32K |
The specific parameters of BatGPT-15B-sirius are as follows:
| Model Name | Hidden Size | Num Layers | Query Heads | Key/Value Heads |Vocab Size | Total Params | Training Dats(tokens) | Position Embedding | Max Length |
|-------------------------|-------|------------|------------|------------|-----------------|--------|--------|----------------|---------|
| BatGPT-15B-sirius | 5,632 | 48 | 44 | 2 | 65,536 | 15,030,081,024 | 1T | [RoPE](https://arxiv.org/abs/2104.09864) | 32K |
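A back-of-the-envelope sketch of why 2 key/value heads matter for inference memory, using the numbers from the table above (fp16 cache assumed; this is an estimate, not an official figure):
```python
layers, kv_heads, q_heads = 48, 2, 44
head_dim = 5632 // 44  # 128

# Per-token KV cache in bytes: K and V tensors, 2 bytes each in fp16.
mqa_bytes = 2 * layers * kv_heads * head_dim * 2   # ~48 KiB per token with Multi-Query Attention
mha_bytes = 2 * layers * q_heads * head_dim * 2    # ~1.03 MiB per token if every query head kept its own K/V
print(mha_bytes / mqa_bytes)                       # ~22x smaller cache
```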
- **Developed by:** MLP Lab of Wuhan University, Shanghai Jiao Tong University
- **Email**: [email protected], [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** The code in this project is licensed under the Apache 2.0 license, and the model weights are licensed under the GNU AGPL 3.0 license. If you intend to use the models included in this project for commercial purposes or public deployment, please email us to obtain authorization. Commercial usage information will be used for record-keeping purposes only, and no fees will be charged.
## 免责声明 (Disclaimers)
BatGPT-15B-sirius 模型的使用应当遵循社会的公序良俗,不能被用于任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 BatGPT-15B-sirius 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。如使用本项目所含模型及其修改版本提供服务产生误导性或有害性言论,造成不良影响,由服务提供方负责,与本项目无关。
The use of the BatGPT-15B-sirius model should adhere to societal norms and not be used for any activities that jeopardize national or social security or violate the law. Additionally, we also request users not to use the BatGPT-15B-sirius model for internet services that have not undergone appropriate security review and documentation. We hope that all users will abide by this principle to ensure that technological development occurs in a regulated and legal environment.
We have done our best to ensure the compliance of the data used during the model training process. However, despite our significant efforts, unforeseen issues may still arise due to the complexity of the model and data. If misleading or harmful statements are generated through the use of the models included in this project or their modified versions while providing services, the responsibility lies with the service provider and is not associated with this project.
## 引用 (Citation)
如果你觉得我们的工作有帮助的话,请考虑引用我们的BatGPT论文:
If you find our work helpful, please consider citing our BatGPT paper:
```
@article{li2023batgpt,
title={BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer},
author={Li, Zuchao and Zhang, Shitou and Zhao, Hai and Yang, Yifei and Yang, Dongjie},
journal={arXiv preprint arXiv:2307.00360},
year={2023}
}
```
|
Naruke/a2c-PandaReachDense-v2 | Naruke | 2023-07-27T09:56:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T08:41:20Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.91 +/- 0.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` export naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub (filename assumed) and load it.
checkpoint = load_from_hub("Naruke/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
smeintadmin/image_intents | smeintadmin | 2023-07-27T09:51:44Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:smeintadmin/image_intents",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-26T09:22:06Z | ---
license: openrail
datasets:
- smeintadmin/image_intents
language:
- en
library_name: transformers
---
A text-classification model that labels input text as either an image intent or not an image intent. An image intent is text that asks for an image or for an image to be generated.
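A minimal usage sketch with the `transformers` pipeline (the example prompt is made up, and the exact label names are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="smeintadmin/image_intents")
# The label strings depend on the model's config; this just prints the top prediction.
print(classifier("draw me a castle at sunset"))
```
|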
Vageesh1/falcon-7b-pi-ai | Vageesh1 | 2023-07-27T09:43:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T09:43:33Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
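For reference, a hedged sketch of how these settings map onto a `BitsAndBytesConfig` when reloading a 4-bit base model for this adapter (the base model id below is an assumption; it is not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "tiiuae/falcon-7b" is an assumed base; the adapter weights from this repo are attached on top.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", quantization_config=bnb_config, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Vageesh1/falcon-7b-pi-ai")
```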
### Framework versions
- PEFT 0.5.0.dev0
|
ketong3906/my_awesome_mc_model | ketong3906 | 2023-07-27T09:32:58Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-07-27T09:29:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: my_awesome_mc_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mc_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6809
- Accuracy: 0.75
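A hedged inference sketch for the multiple-choice head (the prompt and candidate endings are made up; SWAG-style inputs pair a context with each candidate):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("ketong3906/my_awesome_mc_model")
model = AutoModelForMultipleChoice.from_pretrained("ketong3906/my_awesome_mc_model")

prompt = "A man opens the fridge."
candidates = ["He takes out a carton of milk.", "He flies the fridge to the moon."]
# Encode (prompt, candidate) pairs, then add a batch dimension: (1, num_choices, seq_len).
enc = tokenizer([[prompt, c] for c in candidates], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(candidates[logits.argmax(-1).item()])
```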
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 35 | 0.8543 | 0.6714 |
| No log | 2.0 | 70 | 0.6696 | 0.7143 |
| No log | 3.0 | 105 | 0.6809 | 0.75 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Dewa/dog_emotion_v3_resnet | Dewa | 2023-07-27T09:30:46Z | 241 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-27T08:53:19Z | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dog_emotion_v3_resnet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dog_emotion_v3_resnet
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3063
- Accuracy: 0.5075
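A minimal inference sketch using the image-classification pipeline (the image path is a placeholder, and the label set depends on the training data, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Dewa/dog_emotion_v3_resnet")
print(classifier("dog.jpg"))  # "dog.jpg" is a placeholder path
```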
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 1.3721 | 0.3475 |
| No log | 2.0 | 100 | 1.3502 | 0.45 |
| No log | 3.0 | 150 | 1.3292 | 0.485 |
| No log | 4.0 | 200 | 1.3103 | 0.5025 |
| No log | 5.0 | 250 | 1.3063 | 0.5075 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Arabic-Clip-Archive/bert-base-arabertv2-Vit-B-32 | Arabic-Clip-Archive | 2023-07-27T09:23:46Z | 39 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"ar",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-26T06:19:56Z | ---
language: ar
---
<br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('pain/bert-base-arabertv2-Vit-B-32')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
|
himanimaheshwari3/himani-text-imdb | himanimaheshwari3 | 2023-07-27T09:21:46Z | 64 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-27T09:20:05Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: himanimaheshwari3/himani-text-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# himanimaheshwari3/himani-text-imdb
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.7148
- Validation Loss: 10.2666
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -947, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.7148 | 10.2666 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Trong-Nghia/bert-large-uncased-detect-dep-v9 | Trong-Nghia | 2023-07-27T09:17:53Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-26T04:38:38Z | ---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-detect-dep-v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-detect-dep-v9
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5492
- Accuracy: 0.745
- F1: 0.8200
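A minimal inference sketch (the example sentence is made up, and the meaning of the output labels is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Trong-Nghia/bert-large-uncased-detect-dep-v9")
print(classifier("I have been feeling exhausted and hopeless for weeks."))
```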
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6247 | 1.0 | 1502 | 0.5405 | 0.748 | 0.8230 |
| 0.5825 | 2.0 | 3004 | 0.5492 | 0.745 | 0.8200 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
chunwoolee0/circulus-kobart-en-to-ko | chunwoolee0 | 2023-07-27T09:15:28Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:circulus/kobart-trans-en-ko-v2",
"base_model:finetune:circulus/kobart-trans-en-ko-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-27T08:37:31Z | ---
base_model: circulus/kobart-trans-en-ko-v2
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: circulus-kobart-en-to-ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-ko
split: train
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 2.6900397070648445
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# circulus-kobart-en-to-ko
This model is a fine-tuned version of [circulus/kobart-trans-en-ko-v2](https://huggingface.co/circulus/kobart-trans-en-ko-v2) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0986
- Bleu: 2.6900
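A minimal English-to-Korean inference sketch via the text2text-generation pipeline (the example sentence is made up):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="chunwoolee0/circulus-kobart-en-to-ko")
print(translator("Open the file manager.")[0]["generated_text"])
```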
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
lorenpe2/distiluse-base-multilingual-cased-v2 | lorenpe2 | 2023-07-27T09:13:27Z | 1,364 | 0 | sentence-transformers | [
"sentence-transformers",
"onnx",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-27T08:56:23Z | ---
pipeline_tag: sentence-similarity
language: multilingual
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ONNX conversion of distiluse-base-multilingual-cased-v2
## Conversion of [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2)
This is a [sentence-transformers](https://www.SBERT.net) ONNX model: it maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. This custom model outputs `last_hidden_state`, similar to the original sentence-transformers implementation.
## Usage (HuggingFace Optimum)
Using this model becomes easy when you have [optimum](https://github.com/huggingface/optimum) installed:
```
python -m pip install optimum
```
You may also need the following:
```
python -m pip install onnxruntime
python -m pip install onnx
```
Then you can use the model like this:
```python
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks
from transformers import AutoTokenizer
model = ORTModelForCustomTasks.from_pretrained("lorenpe2/distiluse-base-multilingual-cased-v2")
tokenizer = AutoTokenizer.from_pretrained("lorenpe2/distiluse-base-multilingual-cased-v2")
inputs = tokenizer("I love burritos!", return_tensors="pt")
pred = model(**inputs)
```
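Since the export returns `last_hidden_state`, sentence embeddings still require pooling on top. Below is a minimal mean-pooling sketch; note that the full SentenceTransformer also applies the Dense 768→512 layer listed in the architecture section, which this sketch omits, and the dictionary access on `pred` is an assumption about the output format.
```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings while ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Assuming `pred` exposes the hidden states under "last_hidden_state":
sentence_embedding = mean_pool(pred["last_hidden_state"], inputs["attention_mask"])
```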
You will also be able to leverage the pipeline API in transformers:
```python
from transformers import pipeline
onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
text = "I love burritos!"
pred = onnx_extractor(text)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
NasimB/cbt-rarity-guten-fixed | NasimB | 2023-07-27T09:03:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-27T06:29:45Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-guten-fixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-guten-fixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0985
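A minimal generation sketch (the prompt is made up):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/cbt-rarity-guten-fixed")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```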
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3519 | 0.29 | 500 | 5.3468 |
| 5.0333 | 0.58 | 1000 | 4.9291 |
| 4.7073 | 0.87 | 1500 | 4.6889 |
| 4.4427 | 1.17 | 2000 | 4.5469 |
| 4.2897 | 1.46 | 2500 | 4.4291 |
| 4.1828 | 1.75 | 3000 | 4.3230 |
| 4.0733 | 2.04 | 3500 | 4.2457 |
| 3.8847 | 2.33 | 4000 | 4.2009 |
| 3.8597 | 2.62 | 4500 | 4.1478 |
| 3.8231 | 2.91 | 5000 | 4.0935 |
| 3.6322 | 3.21 | 5500 | 4.0913 |
| 3.5786 | 3.5 | 6000 | 4.0641 |
| 3.5646 | 3.79 | 6500 | 4.0290 |
| 3.477 | 4.08 | 7000 | 4.0268 |
| 3.3022 | 4.37 | 7500 | 4.0259 |
| 3.3082 | 4.66 | 8000 | 4.0106 |
| 3.2938 | 4.95 | 8500 | 3.9979 |
| 3.1532 | 5.24 | 9000 | 4.0100 |
| 3.1253 | 5.54 | 9500 | 4.0096 |
| 3.122 | 5.83 | 10000 | 4.0085 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lixsh6/XLM-3B5-embedding | lixsh6 | 2023-07-27T09:03:12Z | 0 | 0 | null | [
"mteb",
"model-index",
"region:us"
]
| null | 2023-07-26T02:57:41Z | ---
tags:
- mteb
model-index:
- name: xlm3b5_step3len260_b128g8_lr1e-5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.94029850746269
- type: ap
value: 28.832990644897478
- type: f1
value: 60.32686940828024
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 94.697425
- type: ap
value: 92.35377895045687
- type: f1
value: 94.6945423828739
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 51.586
- type: f1
value: 49.90891720350314
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.781
- type: map_at_10
value: 30.854
- type: map_at_100
value: 32.344
- type: map_at_1000
value: 32.364
- type: map_at_3
value: 25.711000000000002
- type: map_at_5
value: 28.254
- type: mrr_at_1
value: 18.563
- type: mrr_at_10
value: 31.137999999999998
- type: mrr_at_100
value: 32.621
- type: mrr_at_1000
value: 32.641
- type: mrr_at_3
value: 25.984
- type: mrr_at_5
value: 28.53
- type: ndcg_at_1
value: 17.781
- type: ndcg_at_10
value: 39.206
- type: ndcg_at_100
value: 45.751
- type: ndcg_at_1000
value: 46.225
- type: ndcg_at_3
value: 28.313
- type: ndcg_at_5
value: 32.919
- type: precision_at_1
value: 17.781
- type: precision_at_10
value: 6.65
- type: precision_at_100
value: 0.9560000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 11.949
- type: precision_at_5
value: 9.417
- type: recall_at_1
value: 17.781
- type: recall_at_10
value: 66.501
- type: recall_at_100
value: 95.59
- type: recall_at_1000
value: 99.21799999999999
- type: recall_at_3
value: 35.846000000000004
- type: recall_at_5
value: 47.083999999999996
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.44154312957711
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.189712542346385
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.72571219134687
- type: mrr
value: 76.3612979817966
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.62762841254953
- type: cos_sim_spearman
value: 80.72111639383013
- type: euclidean_pearson
value: 82.63506732956259
- type: euclidean_spearman
value: 81.177753304636
- type: manhattan_pearson
value: 82.5891836637346
- type: manhattan_spearman
value: 81.06811225217339
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.34090909090908
- type: f1
value: 79.4054298683183
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.82441952130262
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.132057843418416
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.23
- type: map_at_10
value: 46.763
- type: map_at_100
value: 48.454
- type: map_at_1000
value: 48.58
- type: map_at_3
value: 43.167
- type: map_at_5
value: 45.214
- type: mrr_at_1
value: 42.775
- type: mrr_at_10
value: 53.190000000000005
- type: mrr_at_100
value: 53.928
- type: mrr_at_1000
value: 53.964
- type: mrr_at_3
value: 51.168
- type: mrr_at_5
value: 52.434000000000005
- type: ndcg_at_1
value: 42.775
- type: ndcg_at_10
value: 53.376999999999995
- type: ndcg_at_100
value: 58.748
- type: ndcg_at_1000
value: 60.461
- type: ndcg_at_3
value: 48.929
- type: ndcg_at_5
value: 50.99399999999999
- type: precision_at_1
value: 42.775
- type: precision_at_10
value: 10.428999999999998
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.215
- type: precision_at_3
value: 23.939
- type: precision_at_5
value: 17.082
- type: recall_at_1
value: 34.23
- type: recall_at_10
value: 64.96300000000001
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 97.917
- type: recall_at_3
value: 51.815
- type: recall_at_5
value: 57.781000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.935
- type: map_at_10
value: 39.574999999999996
- type: map_at_100
value: 40.891
- type: map_at_1000
value: 41.043
- type: map_at_3
value: 36.248999999999995
- type: map_at_5
value: 38.157999999999994
- type: mrr_at_1
value: 36.624
- type: mrr_at_10
value: 45.241
- type: mrr_at_100
value: 46.028000000000006
- type: mrr_at_1000
value: 46.082
- type: mrr_at_3
value: 42.93
- type: mrr_at_5
value: 44.417
- type: ndcg_at_1
value: 36.624
- type: ndcg_at_10
value: 45.423
- type: ndcg_at_100
value: 49.971
- type: ndcg_at_1000
value: 52.382
- type: ndcg_at_3
value: 41.019
- type: ndcg_at_5
value: 43.254
- type: precision_at_1
value: 36.624
- type: precision_at_10
value: 8.86
- type: precision_at_100
value: 1.458
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.276
- type: precision_at_5
value: 14.573
- type: recall_at_1
value: 28.935
- type: recall_at_10
value: 55.745999999999995
- type: recall_at_100
value: 74.977
- type: recall_at_1000
value: 90.505
- type: recall_at_3
value: 42.575
- type: recall_at_5
value: 48.902
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.828
- type: map_at_10
value: 50.888999999999996
- type: map_at_100
value: 52.001
- type: map_at_1000
value: 52.054
- type: map_at_3
value: 47.638999999999996
- type: map_at_5
value: 49.423
- type: mrr_at_1
value: 44.765
- type: mrr_at_10
value: 54.408
- type: mrr_at_100
value: 55.116
- type: mrr_at_1000
value: 55.144000000000005
- type: mrr_at_3
value: 52.038
- type: mrr_at_5
value: 53.323
- type: ndcg_at_1
value: 44.765
- type: ndcg_at_10
value: 56.724
- type: ndcg_at_100
value: 61.058
- type: ndcg_at_1000
value: 62.125
- type: ndcg_at_3
value: 51.324000000000005
- type: ndcg_at_5
value: 53.805
- type: precision_at_1
value: 44.765
- type: precision_at_10
value: 9.248000000000001
- type: precision_at_100
value: 1.234
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.093
- type: precision_at_5
value: 15.799
- type: recall_at_1
value: 38.828
- type: recall_at_10
value: 70.493
- type: recall_at_100
value: 89.293
- type: recall_at_1000
value: 96.872
- type: recall_at_3
value: 55.74400000000001
- type: recall_at_5
value: 61.95
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.085
- type: map_at_10
value: 30.070000000000004
- type: map_at_100
value: 31.206
- type: map_at_1000
value: 31.291999999999998
- type: map_at_3
value: 27.011000000000003
- type: map_at_5
value: 28.854999999999997
- type: mrr_at_1
value: 23.842
- type: mrr_at_10
value: 31.755
- type: mrr_at_100
value: 32.778
- type: mrr_at_1000
value: 32.845
- type: mrr_at_3
value: 28.851
- type: mrr_at_5
value: 30.574
- type: ndcg_at_1
value: 23.842
- type: ndcg_at_10
value: 35.052
- type: ndcg_at_100
value: 40.550999999999995
- type: ndcg_at_1000
value: 42.789
- type: ndcg_at_3
value: 29.096
- type: ndcg_at_5
value: 32.251000000000005
- type: precision_at_1
value: 23.842
- type: precision_at_10
value: 5.605
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.316
- type: precision_at_5
value: 9.13
- type: recall_at_1
value: 22.085
- type: recall_at_10
value: 48.815999999999995
- type: recall_at_100
value: 74.039
- type: recall_at_1000
value: 90.872
- type: recall_at_3
value: 33.098
- type: recall_at_5
value: 40.647
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.088999999999999
- type: map_at_10
value: 21.526
- type: map_at_100
value: 22.832
- type: map_at_1000
value: 22.958000000000002
- type: map_at_3
value: 18.747
- type: map_at_5
value: 20.396
- type: mrr_at_1
value: 17.662
- type: mrr_at_10
value: 25.513
- type: mrr_at_100
value: 26.621
- type: mrr_at_1000
value: 26.698
- type: mrr_at_3
value: 22.658
- type: mrr_at_5
value: 24.449
- type: ndcg_at_1
value: 17.662
- type: ndcg_at_10
value: 26.506999999999998
- type: ndcg_at_100
value: 32.782
- type: ndcg_at_1000
value: 35.709999999999994
- type: ndcg_at_3
value: 21.279
- type: ndcg_at_5
value: 23.998
- type: precision_at_1
value: 17.662
- type: precision_at_10
value: 5.124
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.323
- type: precision_at_5
value: 8.158999999999999
- type: recall_at_1
value: 14.088999999999999
- type: recall_at_10
value: 37.874
- type: recall_at_100
value: 65.34100000000001
- type: recall_at_1000
value: 86.06099999999999
- type: recall_at_3
value: 23.738999999999997
- type: recall_at_5
value: 30.359
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.75
- type: map_at_10
value: 34.156
- type: map_at_100
value: 35.638999999999996
- type: map_at_1000
value: 35.754999999999995
- type: map_at_3
value: 31.047000000000004
- type: map_at_5
value: 32.823
- type: mrr_at_1
value: 30.991000000000003
- type: mrr_at_10
value: 39.509
- type: mrr_at_100
value: 40.582
- type: mrr_at_1000
value: 40.636
- type: mrr_at_3
value: 37.103
- type: mrr_at_5
value: 38.503
- type: ndcg_at_1
value: 30.991000000000003
- type: ndcg_at_10
value: 39.719
- type: ndcg_at_100
value: 45.984
- type: ndcg_at_1000
value: 48.293
- type: ndcg_at_3
value: 34.92
- type: ndcg_at_5
value: 37.253
- type: precision_at_1
value: 30.991000000000003
- type: precision_at_10
value: 7.3340000000000005
- type: precision_at_100
value: 1.225
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 16.586000000000002
- type: precision_at_5
value: 12.127
- type: recall_at_1
value: 24.75
- type: recall_at_10
value: 51.113
- type: recall_at_100
value: 77.338
- type: recall_at_1000
value: 92.764
- type: recall_at_3
value: 37.338
- type: recall_at_5
value: 43.437
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.158
- type: map_at_10
value: 32.877
- type: map_at_100
value: 34.226
- type: map_at_1000
value: 34.35
- type: map_at_3
value: 29.43
- type: map_at_5
value: 31.319000000000003
- type: mrr_at_1
value: 29.224
- type: mrr_at_10
value: 38.080000000000005
- type: mrr_at_100
value: 39.04
- type: mrr_at_1000
value: 39.097
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.771
- type: ndcg_at_1
value: 29.224
- type: ndcg_at_10
value: 38.805
- type: ndcg_at_100
value: 44.746
- type: ndcg_at_1000
value: 47.038000000000004
- type: ndcg_at_3
value: 33.269
- type: ndcg_at_5
value: 35.611
- type: precision_at_1
value: 29.224
- type: precision_at_10
value: 7.454
- type: precision_at_100
value: 1.221
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 16.134
- type: precision_at_5
value: 11.895
- type: recall_at_1
value: 23.158
- type: recall_at_10
value: 51.487
- type: recall_at_100
value: 77.464
- type: recall_at_1000
value: 92.525
- type: recall_at_3
value: 35.478
- type: recall_at_5
value: 41.722
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.456916666666668
- type: map_at_10
value: 33.5495
- type: map_at_100
value: 34.86808333333333
- type: map_at_1000
value: 34.98908333333333
- type: map_at_3
value: 30.59158333333334
- type: map_at_5
value: 32.24916666666667
- type: mrr_at_1
value: 29.387250000000005
- type: mrr_at_10
value: 37.73958333333333
- type: mrr_at_100
value: 38.6595
- type: mrr_at_1000
value: 38.718250000000005
- type: mrr_at_3
value: 35.31658333333333
- type: mrr_at_5
value: 36.69441666666667
- type: ndcg_at_1
value: 29.387250000000005
- type: ndcg_at_10
value: 38.910333333333334
- type: ndcg_at_100
value: 44.40241666666666
- type: ndcg_at_1000
value: 46.72008333333334
- type: ndcg_at_3
value: 34.045583333333326
- type: ndcg_at_5
value: 36.33725
- type: precision_at_1
value: 29.387250000000005
- type: precision_at_10
value: 7.034666666666668
- type: precision_at_100
value: 1.1698333333333333
- type: precision_at_1000
value: 0.15599999999999997
- type: precision_at_3
value: 15.866416666666666
- type: precision_at_5
value: 11.456333333333331
- type: recall_at_1
value: 24.456916666666668
- type: recall_at_10
value: 50.47758333333333
- type: recall_at_100
value: 74.52275
- type: recall_at_1000
value: 90.7105
- type: recall_at_3
value: 36.86275
- type: recall_at_5
value: 42.76533333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.356
- type: map_at_10
value: 25.378
- type: map_at_100
value: 26.349
- type: map_at_1000
value: 26.451
- type: map_at_3
value: 23.403
- type: map_at_5
value: 24.614
- type: mrr_at_1
value: 22.086
- type: mrr_at_10
value: 28.072000000000003
- type: mrr_at_100
value: 28.887
- type: mrr_at_1000
value: 28.965999999999998
- type: mrr_at_3
value: 26.074
- type: mrr_at_5
value: 27.293
- type: ndcg_at_1
value: 22.086
- type: ndcg_at_10
value: 29.107
- type: ndcg_at_100
value: 34.0
- type: ndcg_at_1000
value: 36.793
- type: ndcg_at_3
value: 25.407999999999998
- type: ndcg_at_5
value: 27.375
- type: precision_at_1
value: 22.086
- type: precision_at_10
value: 4.678
- type: precision_at_100
value: 0.7779999999999999
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 10.992
- type: precision_at_5
value: 7.853000000000001
- type: recall_at_1
value: 19.356
- type: recall_at_10
value: 37.913999999999994
- type: recall_at_100
value: 60.507999999999996
- type: recall_at_1000
value: 81.459
- type: recall_at_3
value: 27.874
- type: recall_at_5
value: 32.688
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.008
- type: map_at_10
value: 22.431
- type: map_at_100
value: 23.61
- type: map_at_1000
value: 23.743
- type: map_at_3
value: 20.358
- type: map_at_5
value: 21.371000000000002
- type: mrr_at_1
value: 19.752
- type: mrr_at_10
value: 26.333000000000002
- type: mrr_at_100
value: 27.297
- type: mrr_at_1000
value: 27.378000000000004
- type: mrr_at_3
value: 24.358
- type: mrr_at_5
value: 25.354
- type: ndcg_at_1
value: 19.752
- type: ndcg_at_10
value: 26.712000000000003
- type: ndcg_at_100
value: 32.294
- type: ndcg_at_1000
value: 35.410000000000004
- type: ndcg_at_3
value: 22.974
- type: ndcg_at_5
value: 24.412
- type: precision_at_1
value: 19.752
- type: precision_at_10
value: 4.986
- type: precision_at_100
value: 0.924
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 10.966
- type: precision_at_5
value: 7.832
- type: recall_at_1
value: 16.008
- type: recall_at_10
value: 35.716
- type: recall_at_100
value: 60.76200000000001
- type: recall_at_1000
value: 83.204
- type: recall_at_3
value: 25.092
- type: recall_at_5
value: 28.858
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.743000000000002
- type: map_at_10
value: 34.492
- type: map_at_100
value: 35.716
- type: map_at_1000
value: 35.815999999999995
- type: map_at_3
value: 31.201
- type: map_at_5
value: 32.926
- type: mrr_at_1
value: 29.384
- type: mrr_at_10
value: 38.333
- type: mrr_at_100
value: 39.278
- type: mrr_at_1000
value: 39.330999999999996
- type: mrr_at_3
value: 35.65
- type: mrr_at_5
value: 36.947
- type: ndcg_at_1
value: 29.384
- type: ndcg_at_10
value: 40.195
- type: ndcg_at_100
value: 45.686
- type: ndcg_at_1000
value: 47.906
- type: ndcg_at_3
value: 34.477000000000004
- type: ndcg_at_5
value: 36.89
- type: precision_at_1
value: 29.384
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 15.983
- type: precision_at_5
value: 11.418000000000001
- type: recall_at_1
value: 24.743000000000002
- type: recall_at_10
value: 53.602000000000004
- type: recall_at_100
value: 77.266
- type: recall_at_1000
value: 92.857
- type: recall_at_3
value: 37.921
- type: recall_at_5
value: 44.124
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.531
- type: map_at_10
value: 35.933
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 38.146
- type: map_at_3
value: 32.713
- type: map_at_5
value: 34.339999999999996
- type: mrr_at_1
value: 32.806000000000004
- type: mrr_at_10
value: 41.728
- type: mrr_at_100
value: 42.731
- type: mrr_at_1000
value: 42.777
- type: mrr_at_3
value: 39.065
- type: mrr_at_5
value: 40.467999999999996
- type: ndcg_at_1
value: 32.806000000000004
- type: ndcg_at_10
value: 42.254999999999995
- type: ndcg_at_100
value: 48.687999999999995
- type: ndcg_at_1000
value: 50.784
- type: ndcg_at_3
value: 37.330999999999996
- type: ndcg_at_5
value: 39.305
- type: precision_at_1
value: 32.806000000000004
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.7209999999999999
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 17.589
- type: precision_at_5
value: 12.845999999999998
- type: recall_at_1
value: 26.531
- type: recall_at_10
value: 53.266000000000005
- type: recall_at_100
value: 81.49499999999999
- type: recall_at_1000
value: 94.506
- type: recall_at_3
value: 38.848
- type: recall_at_5
value: 44.263000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.77
- type: map_at_10
value: 28.504
- type: map_at_100
value: 29.580000000000002
- type: map_at_1000
value: 29.681
- type: map_at_3
value: 26.134
- type: map_at_5
value: 27.551
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 30.713
- type: mrr_at_100
value: 31.628
- type: mrr_at_1000
value: 31.701
- type: mrr_at_3
value: 28.497
- type: mrr_at_5
value: 29.799999999999997
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 33.048
- type: ndcg_at_100
value: 38.321
- type: ndcg_at_1000
value: 40.949999999999996
- type: ndcg_at_3
value: 28.521
- type: ndcg_at_5
value: 30.898999999999997
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.194
- type: precision_at_100
value: 0.86
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 12.2
- type: precision_at_5
value: 8.762
- type: recall_at_1
value: 20.77
- type: recall_at_10
value: 44.741
- type: recall_at_100
value: 68.987
- type: recall_at_1000
value: 88.984
- type: recall_at_3
value: 32.830999999999996
- type: recall_at_5
value: 38.452999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.646
- type: map_at_10
value: 17.432
- type: map_at_100
value: 19.347
- type: map_at_1000
value: 19.555
- type: map_at_3
value: 14.355
- type: map_at_5
value: 15.83
- type: mrr_at_1
value: 21.433
- type: mrr_at_10
value: 32.583
- type: mrr_at_100
value: 33.708
- type: mrr_at_1000
value: 33.751999999999995
- type: mrr_at_3
value: 28.979
- type: mrr_at_5
value: 30.979
- type: ndcg_at_1
value: 21.433
- type: ndcg_at_10
value: 25.025
- type: ndcg_at_100
value: 32.818999999999996
- type: ndcg_at_1000
value: 36.549
- type: ndcg_at_3
value: 19.689
- type: ndcg_at_5
value: 21.462
- type: precision_at_1
value: 21.433
- type: precision_at_10
value: 8.085
- type: precision_at_100
value: 1.6340000000000001
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 14.832
- type: precision_at_5
value: 11.530999999999999
- type: recall_at_1
value: 9.646
- type: recall_at_10
value: 31.442999999999998
- type: recall_at_100
value: 58.48
- type: recall_at_1000
value: 79.253
- type: recall_at_3
value: 18.545
- type: recall_at_5
value: 23.362
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.48
- type: map_at_10
value: 18.127
- type: map_at_100
value: 25.563999999999997
- type: map_at_1000
value: 27.386
- type: map_at_3
value: 13.189
- type: map_at_5
value: 15.417
- type: mrr_at_1
value: 63.74999999999999
- type: mrr_at_10
value: 71.34899999999999
- type: mrr_at_100
value: 71.842
- type: mrr_at_1000
value: 71.851
- type: mrr_at_3
value: 69.167
- type: mrr_at_5
value: 70.479
- type: ndcg_at_1
value: 51.87500000000001
- type: ndcg_at_10
value: 38.792
- type: ndcg_at_100
value: 43.889
- type: ndcg_at_1000
value: 51.561
- type: ndcg_at_3
value: 42.686
- type: ndcg_at_5
value: 40.722
- type: precision_at_1
value: 63.74999999999999
- type: precision_at_10
value: 30.375000000000004
- type: precision_at_100
value: 10.103
- type: precision_at_1000
value: 2.257
- type: precision_at_3
value: 45.167
- type: precision_at_5
value: 38.95
- type: recall_at_1
value: 8.48
- type: recall_at_10
value: 23.008
- type: recall_at_100
value: 48.875
- type: recall_at_1000
value: 73.402
- type: recall_at_3
value: 14.377
- type: recall_at_5
value: 17.819
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.83
- type: f1
value: 41.76842531751529
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.247
- type: map_at_10
value: 72.782
- type: map_at_100
value: 73.095
- type: map_at_1000
value: 73.112
- type: map_at_3
value: 70.928
- type: map_at_5
value: 72.173
- type: mrr_at_1
value: 67.372
- type: mrr_at_10
value: 77.538
- type: mrr_at_100
value: 77.741
- type: mrr_at_1000
value: 77.74600000000001
- type: mrr_at_3
value: 75.938
- type: mrr_at_5
value: 77.054
- type: ndcg_at_1
value: 67.372
- type: ndcg_at_10
value: 78.001
- type: ndcg_at_100
value: 79.295
- type: ndcg_at_1000
value: 79.648
- type: ndcg_at_3
value: 74.71
- type: ndcg_at_5
value: 76.712
- type: precision_at_1
value: 67.372
- type: precision_at_10
value: 9.844999999999999
- type: precision_at_100
value: 1.065
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 29.308
- type: precision_at_5
value: 18.731
- type: recall_at_1
value: 62.247
- type: recall_at_10
value: 89.453
- type: recall_at_100
value: 94.998
- type: recall_at_1000
value: 97.385
- type: recall_at_3
value: 80.563
- type: recall_at_5
value: 85.58099999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.587
- type: map_at_10
value: 37.316
- type: map_at_100
value: 39.542
- type: map_at_1000
value: 39.701
- type: map_at_3
value: 32.332
- type: map_at_5
value: 35.172
- type: mrr_at_1
value: 42.437999999999995
- type: mrr_at_10
value: 51.98500000000001
- type: mrr_at_100
value: 52.910999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.691
- type: mrr_at_5
value: 51.15
- type: ndcg_at_1
value: 42.437999999999995
- type: ndcg_at_10
value: 45.016
- type: ndcg_at_100
value: 52.541000000000004
- type: ndcg_at_1000
value: 54.99699999999999
- type: ndcg_at_3
value: 41.175
- type: ndcg_at_5
value: 42.647
- type: precision_at_1
value: 42.437999999999995
- type: precision_at_10
value: 12.855
- type: precision_at_100
value: 2.049
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 27.675
- type: precision_at_5
value: 20.617
- type: recall_at_1
value: 22.587
- type: recall_at_10
value: 51.547
- type: recall_at_100
value: 78.88
- type: recall_at_1000
value: 93.741
- type: recall_at_3
value: 37.256
- type: recall_at_5
value: 44.295
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.451
- type: map_at_10
value: 48.082
- type: map_at_100
value: 49.08
- type: map_at_1000
value: 49.163000000000004
- type: map_at_3
value: 44.766
- type: map_at_5
value: 46.722
- type: mrr_at_1
value: 64.902
- type: mrr_at_10
value: 72.195
- type: mrr_at_100
value: 72.572
- type: mrr_at_1000
value: 72.589
- type: mrr_at_3
value: 70.774
- type: mrr_at_5
value: 71.611
- type: ndcg_at_1
value: 64.902
- type: ndcg_at_10
value: 57.14399999999999
- type: ndcg_at_100
value: 60.916000000000004
- type: ndcg_at_1000
value: 62.649
- type: ndcg_at_3
value: 52.09
- type: ndcg_at_5
value: 54.70399999999999
- type: precision_at_1
value: 64.902
- type: precision_at_10
value: 12.136
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 21.823
- type: recall_at_1
value: 32.451
- type: recall_at_10
value: 60.682
- type: recall_at_100
value: 75.523
- type: recall_at_1000
value: 87.063
- type: recall_at_3
value: 49.399
- type: recall_at_5
value: 54.55799999999999
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 89.6584
- type: ap
value: 85.36881978624284
- type: f1
value: 89.64170045393931
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 17.942
- type: map_at_10
value: 29.755
- type: map_at_100
value: 31.008000000000003
- type: map_at_1000
value: 31.067
- type: map_at_3
value: 25.959
- type: map_at_5
value: 28.044999999999998
- type: mrr_at_1
value: 18.467
- type: mrr_at_10
value: 30.253000000000004
- type: mrr_at_100
value: 31.461
- type: mrr_at_1000
value: 31.513
- type: mrr_at_3
value: 26.528000000000002
- type: mrr_at_5
value: 28.588
- type: ndcg_at_1
value: 18.467
- type: ndcg_at_10
value: 36.510999999999996
- type: ndcg_at_100
value: 42.748999999999995
- type: ndcg_at_1000
value: 44.188
- type: ndcg_at_3
value: 28.752
- type: ndcg_at_5
value: 32.462
- type: precision_at_1
value: 18.467
- type: precision_at_10
value: 6.006
- type: precision_at_100
value: 0.9169999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.55
- type: precision_at_5
value: 9.395000000000001
- type: recall_at_1
value: 17.942
- type: recall_at_10
value: 57.440000000000005
- type: recall_at_100
value: 86.66199999999999
- type: recall_at_1000
value: 97.613
- type: recall_at_3
value: 36.271
- type: recall_at_5
value: 45.167
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.76652986776104
- type: f1
value: 93.726741953801
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.79753761969903
- type: f1
value: 45.8547023848409
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.26563550773369
- type: f1
value: 67.37602000921103
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.51244115669132
- type: f1
value: 73.79891534060464
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.88016176143737
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.07643038274053
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.81344342001539
- type: mrr
value: 31.82078962760685
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.617
- type: map_at_10
value: 11.501
- type: map_at_100
value: 14.729999999999999
- type: map_at_1000
value: 16.209
- type: map_at_3
value: 8.275
- type: map_at_5
value: 9.853000000000002
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 51.471999999999994
- type: mrr_at_100
value: 52.020999999999994
- type: mrr_at_1000
value: 52.066
- type: mrr_at_3
value: 49.484
- type: mrr_at_5
value: 50.660000000000004
- type: ndcg_at_1
value: 38.854
- type: ndcg_at_10
value: 31.567
- type: ndcg_at_100
value: 29.842999999999996
- type: ndcg_at_1000
value: 38.995000000000005
- type: ndcg_at_3
value: 36.785000000000004
- type: ndcg_at_5
value: 34.955000000000005
- type: precision_at_1
value: 40.867
- type: precision_at_10
value: 23.591
- type: precision_at_100
value: 7.771
- type: precision_at_1000
value: 2.11
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 30.959999999999997
- type: recall_at_1
value: 4.617
- type: recall_at_10
value: 15.609
- type: recall_at_100
value: 31.313999999999997
- type: recall_at_1000
value: 63.085
- type: recall_at_3
value: 9.746
- type: recall_at_5
value: 12.295
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.797
- type: map_at_10
value: 44.822
- type: map_at_100
value: 45.891999999999996
- type: map_at_1000
value: 45.919
- type: map_at_3
value: 40.237
- type: map_at_5
value: 42.913000000000004
- type: mrr_at_1
value: 32.561
- type: mrr_at_10
value: 46.982
- type: mrr_at_100
value: 47.827
- type: mrr_at_1000
value: 47.843999999999994
- type: mrr_at_3
value: 43.26
- type: mrr_at_5
value: 45.527
- type: ndcg_at_1
value: 32.532
- type: ndcg_at_10
value: 52.832
- type: ndcg_at_100
value: 57.343999999999994
- type: ndcg_at_1000
value: 57.93899999999999
- type: ndcg_at_3
value: 44.246
- type: ndcg_at_5
value: 48.698
- type: precision_at_1
value: 32.532
- type: precision_at_10
value: 9.003
- type: precision_at_100
value: 1.1480000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.605999999999998
- type: precision_at_5
value: 14.954
- type: recall_at_1
value: 28.797
- type: recall_at_10
value: 75.065
- type: recall_at_100
value: 94.6
- type: recall_at_1000
value: 98.967
- type: recall_at_3
value: 52.742
- type: recall_at_5
value: 63.012
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.84700000000001
- type: map_at_10
value: 83.91499999999999
- type: map_at_100
value: 84.568
- type: map_at_1000
value: 84.584
- type: map_at_3
value: 80.87299999999999
- type: map_at_5
value: 82.76299999999999
- type: mrr_at_1
value: 80.4
- type: mrr_at_10
value: 86.843
- type: mrr_at_100
value: 86.956
- type: mrr_at_1000
value: 86.957
- type: mrr_at_3
value: 85.843
- type: mrr_at_5
value: 86.521
- type: ndcg_at_1
value: 80.4
- type: ndcg_at_10
value: 87.787
- type: ndcg_at_100
value: 89.039
- type: ndcg_at_1000
value: 89.137
- type: ndcg_at_3
value: 84.76700000000001
- type: ndcg_at_5
value: 86.413
- type: precision_at_1
value: 80.4
- type: precision_at_10
value: 13.391
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.123
- type: precision_at_5
value: 24.462
- type: recall_at_1
value: 69.84700000000001
- type: recall_at_10
value: 95.296
- type: recall_at_100
value: 99.543
- type: recall_at_1000
value: 99.98700000000001
- type: recall_at_3
value: 86.75
- type: recall_at_5
value: 91.33099999999999
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 54.24501738730203
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.28243705082983
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.473
- type: map_at_10
value: 8.944
- type: map_at_100
value: 11.21
- type: map_at_1000
value: 11.601
- type: map_at_3
value: 6.167
- type: map_at_5
value: 7.438000000000001
- type: mrr_at_1
value: 17.1
- type: mrr_at_10
value: 26.487
- type: mrr_at_100
value: 27.888
- type: mrr_at_1000
value: 27.961000000000002
- type: mrr_at_3
value: 23.25
- type: mrr_at_5
value: 24.91
- type: ndcg_at_1
value: 17.1
- type: ndcg_at_10
value: 15.615000000000002
- type: ndcg_at_100
value: 24.667
- type: ndcg_at_1000
value: 31.467
- type: ndcg_at_3
value: 14.035
- type: ndcg_at_5
value: 12.443
- type: precision_at_1
value: 17.1
- type: precision_at_10
value: 8.4
- type: precision_at_100
value: 2.149
- type: precision_at_1000
value: 0.378
- type: precision_at_3
value: 13.200000000000001
- type: precision_at_5
value: 11.06
- type: recall_at_1
value: 3.473
- type: recall_at_10
value: 17.087
- type: recall_at_100
value: 43.641999999999996
- type: recall_at_1000
value: 76.7
- type: recall_at_3
value: 8.037999999999998
- type: recall_at_5
value: 11.232000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.07032781899852
- type: cos_sim_spearman
value: 81.86668245459153
- type: euclidean_pearson
value: 83.75572948495356
- type: euclidean_spearman
value: 81.88575221829207
- type: manhattan_pearson
value: 83.73171218997966
- type: manhattan_spearman
value: 81.85928771458329
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 80.29008828604368
- type: cos_sim_spearman
value: 70.7510437896188
- type: euclidean_pearson
value: 76.65867322096001
- type: euclidean_spearman
value: 70.53984435296805
- type: manhattan_pearson
value: 76.6398826461678
- type: manhattan_spearman
value: 70.55153706770477
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.55610063096913
- type: cos_sim_spearman
value: 84.36676850545378
- type: euclidean_pearson
value: 82.81438612985889
- type: euclidean_spearman
value: 84.182693686057
- type: manhattan_pearson
value: 82.8355239074719
- type: manhattan_spearman
value: 84.19280249146543
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 78.94275022740113
- type: cos_sim_spearman
value: 74.50851813226338
- type: euclidean_pearson
value: 77.30867917552419
- type: euclidean_spearman
value: 74.55661368823343
- type: manhattan_pearson
value: 77.31883134876524
- type: manhattan_spearman
value: 74.58999819014154
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.62907185533146
- type: cos_sim_spearman
value: 86.40667080261993
- type: euclidean_pearson
value: 85.15184748925726
- type: euclidean_spearman
value: 86.33853519247509
- type: manhattan_pearson
value: 85.21542426870172
- type: manhattan_spearman
value: 86.4076178438401
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.42449758804275
- type: cos_sim_spearman
value: 84.7411616479609
- type: euclidean_pearson
value: 83.56616729612806
- type: euclidean_spearman
value: 84.44493050289694
- type: manhattan_pearson
value: 83.50906591764574
- type: manhattan_spearman
value: 84.39704993090794
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.84843806728331
- type: cos_sim_spearman
value: 89.03139214250334
- type: euclidean_pearson
value: 89.63615835813032
- type: euclidean_spearman
value: 89.33022202130817
- type: manhattan_pearson
value: 89.67071925715891
- type: manhattan_spearman
value: 89.29339683171531
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.65559857216783
- type: cos_sim_spearman
value: 65.86805861979079
- type: euclidean_pearson
value: 66.69697475461513
- type: euclidean_spearman
value: 66.07735691378713
- type: manhattan_pearson
value: 66.63427637906918
- type: manhattan_spearman
value: 65.95720565040364
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.06435608928308
- type: cos_sim_spearman
value: 86.46139340079428
- type: euclidean_pearson
value: 86.4874804471064
- type: euclidean_spearman
value: 86.19390771731406
- type: manhattan_pearson
value: 86.51184704840284
- type: manhattan_spearman
value: 86.19094101171963
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.10723925640346
- type: mrr
value: 95.62579305226365
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.233
- type: map_at_10
value: 64.94
- type: map_at_100
value: 65.508
- type: map_at_1000
value: 65.537
- type: map_at_3
value: 62.121
- type: map_at_5
value: 63.92400000000001
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 66.352
- type: mrr_at_100
value: 66.751
- type: mrr_at_1000
value: 66.777
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.656
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 69.318
- type: ndcg_at_100
value: 71.822
- type: ndcg_at_1000
value: 72.578
- type: ndcg_at_3
value: 64.532
- type: ndcg_at_5
value: 67.292
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.133
- type: precision_at_100
value: 1.05
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.889
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 56.233
- type: recall_at_10
value: 81.206
- type: recall_at_100
value: 92.80000000000001
- type: recall_at_1000
value: 98.667
- type: recall_at_3
value: 68.672
- type: recall_at_5
value: 75.378
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.56336633663366
- type: cos_sim_ap
value: 86.13024319858586
- type: cos_sim_f1
value: 76.80157946692991
- type: cos_sim_precision
value: 75.82846003898635
- type: cos_sim_recall
value: 77.8
- type: dot_accuracy
value: 99.56336633663366
- type: dot_ap
value: 86.13028343072267
- type: dot_f1
value: 76.80157946692991
- type: dot_precision
value: 75.82846003898635
- type: dot_recall
value: 77.8
- type: euclidean_accuracy
value: 99.56336633663366
- type: euclidean_ap
value: 86.13029040641543
- type: euclidean_f1
value: 76.80157946692991
- type: euclidean_precision
value: 75.82846003898635
- type: euclidean_recall
value: 77.8
- type: manhattan_accuracy
value: 99.56534653465347
- type: manhattan_ap
value: 86.24817068330776
- type: manhattan_f1
value: 77.13580246913581
- type: manhattan_precision
value: 76.19512195121952
- type: manhattan_recall
value: 78.10000000000001
- type: max_accuracy
value: 99.56534653465347
- type: max_ap
value: 86.24817068330776
- type: max_f1
value: 77.13580246913581
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.69564559409538
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.23127531581388
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.845357053686975
- type: mrr
value: 50.59803656311009
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.02241691876377
- type: cos_sim_spearman
value: 29.017719340560923
- type: dot_pearson
value: 29.59373129445045
- type: dot_spearman
value: 29.616196388331968
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.157
- type: map_at_10
value: 0.9440000000000001
- type: map_at_100
value: 4.61
- type: map_at_1000
value: 11.488
- type: map_at_3
value: 0.396
- type: map_at_5
value: 0.569
- type: mrr_at_1
value: 57.99999999999999
- type: mrr_at_10
value: 71.672
- type: mrr_at_100
value: 71.707
- type: mrr_at_1000
value: 71.707
- type: mrr_at_3
value: 68.333
- type: mrr_at_5
value: 70.533
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 45.216
- type: ndcg_at_100
value: 32.623999999999995
- type: ndcg_at_1000
value: 33.006
- type: ndcg_at_3
value: 51.76500000000001
- type: ndcg_at_5
value: 47.888999999999996
- type: precision_at_1
value: 57.99999999999999
- type: precision_at_10
value: 48.0
- type: precision_at_100
value: 32.74
- type: precision_at_1000
value: 14.588000000000001
- type: precision_at_3
value: 55.333
- type: precision_at_5
value: 51.2
- type: recall_at_1
value: 0.157
- type: recall_at_10
value: 1.212
- type: recall_at_100
value: 7.868
- type: recall_at_1000
value: 31.583
- type: recall_at_3
value: 0.443
- type: recall_at_5
value: 0.6779999999999999
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.545
- type: map_at_10
value: 4.6690000000000005
- type: map_at_100
value: 8.982
- type: map_at_1000
value: 10.453999999999999
- type: map_at_3
value: 2.35
- type: map_at_5
value: 3.168
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 28.599999999999998
- type: mrr_at_100
value: 30.287
- type: mrr_at_1000
value: 30.339
- type: mrr_at_3
value: 24.490000000000002
- type: mrr_at_5
value: 27.040999999999997
- type: ndcg_at_1
value: 17.347
- type: ndcg_at_10
value: 13.868
- type: ndcg_at_100
value: 25.499
- type: ndcg_at_1000
value: 37.922
- type: ndcg_at_3
value: 13.746
- type: ndcg_at_5
value: 13.141
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 12.653
- type: precision_at_100
value: 5.776
- type: precision_at_1000
value: 1.3860000000000001
- type: precision_at_3
value: 13.605
- type: precision_at_5
value: 13.061
- type: recall_at_1
value: 1.545
- type: recall_at_10
value: 9.305
- type: recall_at_100
value: 38.084
- type: recall_at_1000
value: 75.897
- type: recall_at_3
value: 2.903
- type: recall_at_5
value: 4.8919999999999995
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.8454
- type: ap
value: 14.744783758537974
- type: f1
value: 54.86055534008869
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.71250707413695
- type: f1
value: 58.76581794782603
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.314744135178934
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.13899982118377
- type: cos_sim_ap
value: 68.03329474978145
- type: cos_sim_f1
value: 63.31192005710206
- type: cos_sim_precision
value: 57.6473136915078
- type: cos_sim_recall
value: 70.21108179419525
- type: dot_accuracy
value: 84.13899982118377
- type: dot_ap
value: 68.03324775052695
- type: dot_f1
value: 63.31192005710206
- type: dot_precision
value: 57.6473136915078
- type: dot_recall
value: 70.21108179419525
- type: euclidean_accuracy
value: 84.13899982118377
- type: euclidean_ap
value: 68.03331114508686
- type: euclidean_f1
value: 63.31192005710206
- type: euclidean_precision
value: 57.6473136915078
- type: euclidean_recall
value: 70.21108179419525
- type: manhattan_accuracy
value: 84.12111819753234
- type: manhattan_ap
value: 67.97378509663328
- type: manhattan_f1
value: 63.38468945594607
- type: manhattan_precision
value: 58.2779991146525
- type: manhattan_recall
value: 69.47229551451187
- type: max_accuracy
value: 84.13899982118377
- type: max_ap
value: 68.03331114508686
- type: max_f1
value: 63.38468945594607
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.68774013272791
- type: cos_sim_ap
value: 83.51733662214374
- type: cos_sim_f1
value: 75.82190771045259
- type: cos_sim_precision
value: 72.72341628959276
- type: cos_sim_recall
value: 79.19618109023713
- type: dot_accuracy
value: 87.68774013272791
- type: dot_ap
value: 83.5173527754126
- type: dot_f1
value: 75.82190771045259
- type: dot_precision
value: 72.72341628959276
- type: dot_recall
value: 79.19618109023713
- type: euclidean_accuracy
value: 87.68774013272791
- type: euclidean_ap
value: 83.51734651146224
- type: euclidean_f1
value: 75.82190771045259
- type: euclidean_precision
value: 72.72341628959276
- type: euclidean_recall
value: 79.19618109023713
- type: manhattan_accuracy
value: 87.67221640082276
- type: manhattan_ap
value: 83.51179463759505
- type: manhattan_f1
value: 75.76243980738361
- type: manhattan_precision
value: 71.99112590127565
- type: manhattan_recall
value: 79.95072374499537
- type: max_accuracy
value: 87.68774013272791
- type: max_ap
value: 83.5173527754126
- type: max_f1
value: 75.82190771045259
--- |
maije/llama2-qlora-finetunined-french | maije | 2023-07-27T09:02:46Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T09:02:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
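As a rough sketch, the same quantization settings can be reproduced with `transformers.BitsAndBytesConfig` when reloading a base model for this adapter (the card does not record the base checkpoint, so the model id below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

# "base-model-id" is a placeholder; substitute the checkpoint used for training.
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```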
### Framework versions
- PEFT 0.5.0.dev0
|
RAPHCVR/llama2-qlora-finetunined-french | RAPHCVR | 2023-07-27T08:58:09Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T08:58:05Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
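A minimal loading sketch for attaching this adapter to its base model with PEFT (the base checkpoint is not recorded in the card, so the id below is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder base checkpoint
model = PeftModel.from_pretrained(base, "RAPHCVR/llama2-qlora-finetunined-french")
```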
|
ketong3906/my_awesome_opus_books_model | ketong3906 | 2023-07-27T08:53:26Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-27T08:41:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train[:1000]
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 6.5252
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6364
- Bleu: 6.5252
- Gen Len: 17.395
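A hedged usage sketch (the T5-style "translate English to French:" prefix is an assumption based on the t5-small base model):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="ketong3906/my_awesome_opus_books_model")
print(translator("translate English to French: The book is on the table."))
```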
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
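A rough sketch of how these settings map onto `Seq2SeqTrainingArguments` (the output directory is a placeholder; the other values are taken from the list above):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # mixed-precision training (native AMP)
)
```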
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 50 | 1.6402 | 6.3992 | 17.405 |
| No log | 2.0 | 100 | 1.6364 | 6.5252 | 17.395 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Samalabama66/a2c-PandaReachDense-v2 | Samalabama66 | 2023-07-27T08:52:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-26T09:41:51Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.58 +/- 0.16
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption, since the card does not list it):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the standard <algo>-<env>.zip naming convention.
checkpoint = load_from_hub("Samalabama66/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
lixsh6/XLM-0B6-embedding | lixsh6 | 2023-07-27T08:43:45Z | 0 | 1 | null | [
"mteb",
"model-index",
"region:us"
]
| null | 2023-07-26T06:38:39Z | ---
tags:
- mteb
model-index:
- name: xlm_0b6_mixlang_newstep3
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.61194029850746
- type: ap
value: 30.653298301473487
- type: f1
value: 62.25241612666261
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.38145000000002
- type: ap
value: 90.31356902458496
- type: f1
value: 93.37421180090173
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.64400000000001
- type: f1
value: 48.975535848642295
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.777
- type: map_at_10
value: 32.274
- type: map_at_100
value: 33.652
- type: map_at_1000
value: 33.669
- type: map_at_3
value: 27.276
- type: map_at_5
value: 29.758000000000003
- type: mrr_at_1
value: 19.63
- type: mrr_at_10
value: 32.573
- type: mrr_at_100
value: 33.951
- type: mrr_at_1000
value: 33.967999999999996
- type: mrr_at_3
value: 27.608
- type: mrr_at_5
value: 30.047
- type: ndcg_at_1
value: 18.777
- type: ndcg_at_10
value: 40.774
- type: ndcg_at_100
value: 46.931
- type: ndcg_at_1000
value: 47.359
- type: ndcg_at_3
value: 30.213
- type: ndcg_at_5
value: 34.705999999999996
- type: precision_at_1
value: 18.777
- type: precision_at_10
value: 6.842
- type: precision_at_100
value: 0.959
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 12.921
- type: precision_at_5
value: 9.943
- type: recall_at_1
value: 18.777
- type: recall_at_10
value: 68.42099999999999
- type: recall_at_100
value: 95.946
- type: recall_at_1000
value: 99.289
- type: recall_at_3
value: 38.762
- type: recall_at_5
value: 49.716
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.53512209912995
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.432491784931464
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.11465519830743
- type: mrr
value: 74.41509475442992
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.1318467537697
- type: cos_sim_spearman
value: 80.25062374562512
- type: euclidean_pearson
value: 81.08228995090938
- type: euclidean_spearman
value: 80.25062374562512
- type: manhattan_pearson
value: 80.69075497902021
- type: manhattan_spearman
value: 79.63916402996817
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.50324675324674
- type: f1
value: 77.34014983227601
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.3047565513338
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.114800929695775
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.757
- type: map_at_10
value: 43.443
- type: map_at_100
value: 44.972
- type: map_at_1000
value: 45.092999999999996
- type: map_at_3
value: 39.566
- type: map_at_5
value: 41.628
- type: mrr_at_1
value: 39.485
- type: mrr_at_10
value: 49.597
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.312999999999995
- type: mrr_at_3
value: 46.876
- type: mrr_at_5
value: 48.35
- type: ndcg_at_1
value: 39.485
- type: ndcg_at_10
value: 50.11600000000001
- type: ndcg_at_100
value: 55.469
- type: ndcg_at_1000
value: 57.253
- type: ndcg_at_3
value: 44.695
- type: ndcg_at_5
value: 46.963
- type: precision_at_1
value: 39.485
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.5789999999999997
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_3
value: 21.793000000000003
- type: precision_at_5
value: 15.651000000000002
- type: recall_at_1
value: 31.757
- type: recall_at_10
value: 62.861
- type: recall_at_100
value: 85.09
- type: recall_at_1000
value: 96.54
- type: recall_at_3
value: 46.981
- type: recall_at_5
value: 53.488
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.616
- type: map_at_10
value: 33.999
- type: map_at_100
value: 35.299
- type: map_at_1000
value: 35.44
- type: map_at_3
value: 31.283
- type: map_at_5
value: 32.71
- type: mrr_at_1
value: 30.701
- type: mrr_at_10
value: 39.115
- type: mrr_at_100
value: 39.912
- type: mrr_at_1000
value: 39.963
- type: mrr_at_3
value: 36.975
- type: mrr_at_5
value: 38.118
- type: ndcg_at_1
value: 30.701
- type: ndcg_at_10
value: 39.454
- type: ndcg_at_100
value: 44.393
- type: ndcg_at_1000
value: 46.822
- type: ndcg_at_3
value: 35.317
- type: ndcg_at_5
value: 37.066
- type: precision_at_1
value: 30.701
- type: precision_at_10
value: 7.661999999999999
- type: precision_at_100
value: 1.308
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 17.346
- type: precision_at_5
value: 12.203999999999999
- type: recall_at_1
value: 24.616
- type: recall_at_10
value: 49.681
- type: recall_at_100
value: 70.729
- type: recall_at_1000
value: 86.361
- type: recall_at_3
value: 37.677
- type: recall_at_5
value: 42.713
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.11
- type: map_at_10
value: 47.619
- type: map_at_100
value: 48.758
- type: map_at_1000
value: 48.818
- type: map_at_3
value: 44.354
- type: map_at_5
value: 46.192
- type: mrr_at_1
value: 41.379
- type: mrr_at_10
value: 51.075
- type: mrr_at_100
value: 51.807
- type: mrr_at_1000
value: 51.842
- type: mrr_at_3
value: 48.464
- type: mrr_at_5
value: 49.944
- type: ndcg_at_1
value: 41.379
- type: ndcg_at_10
value: 53.510999999999996
- type: ndcg_at_100
value: 57.981
- type: ndcg_at_1000
value: 59.245999999999995
- type: ndcg_at_3
value: 47.915
- type: ndcg_at_5
value: 50.586
- type: precision_at_1
value: 41.379
- type: precision_at_10
value: 8.770999999999999
- type: precision_at_100
value: 1.193
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 21.587999999999997
- type: precision_at_5
value: 14.934
- type: recall_at_1
value: 36.11
- type: recall_at_10
value: 67.539
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.889
- type: recall_at_3
value: 52.312999999999995
- type: recall_at_5
value: 58.967999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.831
- type: map_at_10
value: 24.314
- type: map_at_100
value: 25.374999999999996
- type: map_at_1000
value: 25.474000000000004
- type: map_at_3
value: 21.884
- type: map_at_5
value: 23.203
- type: mrr_at_1
value: 18.079
- type: mrr_at_10
value: 25.741000000000003
- type: mrr_at_100
value: 26.728
- type: mrr_at_1000
value: 26.808
- type: mrr_at_3
value: 23.39
- type: mrr_at_5
value: 24.684
- type: ndcg_at_1
value: 18.079
- type: ndcg_at_10
value: 28.738000000000003
- type: ndcg_at_100
value: 34.408
- type: ndcg_at_1000
value: 37.129
- type: ndcg_at_3
value: 23.921999999999997
- type: ndcg_at_5
value: 26.151000000000003
- type: precision_at_1
value: 18.079
- type: precision_at_10
value: 4.768
- type: precision_at_100
value: 0.8089999999999999
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.508000000000001
- type: precision_at_5
value: 7.661
- type: recall_at_1
value: 16.831
- type: recall_at_10
value: 40.967
- type: recall_at_100
value: 68.059
- type: recall_at_1000
value: 88.836
- type: recall_at_3
value: 27.927999999999997
- type: recall_at_5
value: 33.201
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.937000000000001
- type: map_at_10
value: 15.146
- type: map_at_100
value: 16.29
- type: map_at_1000
value: 16.441
- type: map_at_3
value: 13.014999999999999
- type: map_at_5
value: 14.088999999999999
- type: mrr_at_1
value: 11.193999999999999
- type: mrr_at_10
value: 18.199
- type: mrr_at_100
value: 19.278000000000002
- type: mrr_at_1000
value: 19.378
- type: mrr_at_3
value: 15.878999999999998
- type: mrr_at_5
value: 17.141000000000002
- type: ndcg_at_1
value: 11.193999999999999
- type: ndcg_at_10
value: 19.286
- type: ndcg_at_100
value: 25.291999999999998
- type: ndcg_at_1000
value: 29.012999999999998
- type: ndcg_at_3
value: 15.129999999999999
- type: ndcg_at_5
value: 16.926
- type: precision_at_1
value: 11.193999999999999
- type: precision_at_10
value: 3.918
- type: precision_at_100
value: 0.803
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 7.587000000000001
- type: precision_at_5
value: 5.8709999999999996
- type: recall_at_1
value: 8.937000000000001
- type: recall_at_10
value: 28.89
- type: recall_at_100
value: 56.12200000000001
- type: recall_at_1000
value: 82.749
- type: recall_at_3
value: 17.748
- type: recall_at_5
value: 22.042
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.559
- type: map_at_10
value: 28.77
- type: map_at_100
value: 30.144
- type: map_at_1000
value: 30.270999999999997
- type: map_at_3
value: 25.456
- type: map_at_5
value: 27.351999999999997
- type: mrr_at_1
value: 24.062
- type: mrr_at_10
value: 33.409
- type: mrr_at_100
value: 34.369
- type: mrr_at_1000
value: 34.434
- type: mrr_at_3
value: 30.574
- type: mrr_at_5
value: 32.287
- type: ndcg_at_1
value: 24.062
- type: ndcg_at_10
value: 34.537
- type: ndcg_at_100
value: 40.542
- type: ndcg_at_1000
value: 43.208999999999996
- type: ndcg_at_3
value: 29.032000000000004
- type: ndcg_at_5
value: 31.838
- type: precision_at_1
value: 24.062
- type: precision_at_10
value: 6.814000000000001
- type: precision_at_100
value: 1.167
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 14.244000000000002
- type: precision_at_5
value: 10.837
- type: recall_at_1
value: 19.559
- type: recall_at_10
value: 47.175
- type: recall_at_100
value: 73.11
- type: recall_at_1000
value: 91.144
- type: recall_at_3
value: 31.895
- type: recall_at_5
value: 38.978
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.828
- type: map_at_10
value: 27.664
- type: map_at_100
value: 29.099999999999998
- type: map_at_1000
value: 29.220000000000002
- type: map_at_3
value: 24.779
- type: map_at_5
value: 26.227
- type: mrr_at_1
value: 23.744
- type: mrr_at_10
value: 32.11
- type: mrr_at_100
value: 33.152
- type: mrr_at_1000
value: 33.215
- type: mrr_at_3
value: 29.604000000000003
- type: mrr_at_5
value: 30.894
- type: ndcg_at_1
value: 23.744
- type: ndcg_at_10
value: 33.047
- type: ndcg_at_100
value: 39.354
- type: ndcg_at_1000
value: 41.967999999999996
- type: ndcg_at_3
value: 28.133999999999997
- type: ndcg_at_5
value: 30.097
- type: precision_at_1
value: 23.744
- type: precision_at_10
value: 6.381
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 13.699
- type: precision_at_5
value: 9.932
- type: recall_at_1
value: 18.828
- type: recall_at_10
value: 44.777
- type: recall_at_100
value: 72.02499999999999
- type: recall_at_1000
value: 89.883
- type: recall_at_3
value: 30.881999999999998
- type: recall_at_5
value: 36.15
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.89466666666667
- type: map_at_10
value: 28.13191666666667
- type: map_at_100
value: 29.374083333333335
- type: map_at_1000
value: 29.501999999999995
- type: map_at_3
value: 25.450666666666667
- type: map_at_5
value: 26.862083333333338
- type: mrr_at_1
value: 23.87775
- type: mrr_at_10
value: 31.796833333333336
- type: mrr_at_100
value: 32.70425
- type: mrr_at_1000
value: 32.774
- type: mrr_at_3
value: 29.411000000000005
- type: mrr_at_5
value: 30.71525
- type: ndcg_at_1
value: 23.87775
- type: ndcg_at_10
value: 33.14725
- type: ndcg_at_100
value: 38.63300000000001
- type: ndcg_at_1000
value: 41.29166666666668
- type: ndcg_at_3
value: 28.504250000000003
- type: ndcg_at_5
value: 30.546250000000004
- type: precision_at_1
value: 23.87775
- type: precision_at_10
value: 6.143166666666667
- type: precision_at_100
value: 1.0658333333333332
- type: precision_at_1000
value: 0.1495
- type: precision_at_3
value: 13.468083333333333
- type: precision_at_5
value: 9.763416666666664
- type: recall_at_1
value: 19.89466666666667
- type: recall_at_10
value: 44.33358333333333
- type: recall_at_100
value: 68.79966666666667
- type: recall_at_1000
value: 87.5325
- type: recall_at_3
value: 31.34816666666667
- type: recall_at_5
value: 36.612833333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.779
- type: map_at_10
value: 16.581000000000003
- type: map_at_100
value: 17.374000000000002
- type: map_at_1000
value: 17.48
- type: map_at_3
value: 14.777000000000001
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 13.497
- type: mrr_at_10
value: 18.192
- type: mrr_at_100
value: 18.929000000000002
- type: mrr_at_1000
value: 19.014
- type: mrr_at_3
value: 16.488
- type: mrr_at_5
value: 17.285
- type: ndcg_at_1
value: 13.497
- type: ndcg_at_10
value: 19.676
- type: ndcg_at_100
value: 24.081
- type: ndcg_at_1000
value: 27.012000000000004
- type: ndcg_at_3
value: 16.179
- type: ndcg_at_5
value: 17.573
- type: precision_at_1
value: 13.497
- type: precision_at_10
value: 3.512
- type: precision_at_100
value: 0.632
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 7.362
- type: precision_at_5
value: 5.367999999999999
- type: recall_at_1
value: 11.779
- type: recall_at_10
value: 27.613
- type: recall_at_100
value: 48.829
- type: recall_at_1000
value: 71.025
- type: recall_at_3
value: 17.815
- type: recall_at_5
value: 21.279999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.181000000000001
- type: map_at_10
value: 16.724
- type: map_at_100
value: 17.806
- type: map_at_1000
value: 17.946
- type: map_at_3
value: 14.718
- type: map_at_5
value: 15.848
- type: mrr_at_1
value: 13.971
- type: mrr_at_10
value: 19.716
- type: mrr_at_100
value: 20.71
- type: mrr_at_1000
value: 20.804000000000002
- type: mrr_at_3
value: 17.727999999999998
- type: mrr_at_5
value: 18.862000000000002
- type: ndcg_at_1
value: 13.971
- type: ndcg_at_10
value: 20.531
- type: ndcg_at_100
value: 25.901000000000003
- type: ndcg_at_1000
value: 29.317999999999998
- type: ndcg_at_3
value: 16.828000000000003
- type: ndcg_at_5
value: 18.576
- type: precision_at_1
value: 13.971
- type: precision_at_10
value: 4.04
- type: precision_at_100
value: 0.803
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 8.305
- type: precision_at_5
value: 6.29
- type: recall_at_1
value: 11.181000000000001
- type: recall_at_10
value: 29.042
- type: recall_at_100
value: 53.342
- type: recall_at_1000
value: 78.117
- type: recall_at_3
value: 18.804000000000002
- type: recall_at_5
value: 23.22
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.046
- type: map_at_10
value: 30.702
- type: map_at_100
value: 31.961000000000002
- type: map_at_1000
value: 32.077
- type: map_at_3
value: 28.083000000000002
- type: map_at_5
value: 29.391000000000002
- type: mrr_at_1
value: 27.239
- type: mrr_at_10
value: 34.472
- type: mrr_at_100
value: 35.485
- type: mrr_at_1000
value: 35.558
- type: mrr_at_3
value: 32.245000000000005
- type: mrr_at_5
value: 33.42
- type: ndcg_at_1
value: 27.239
- type: ndcg_at_10
value: 35.453
- type: ndcg_at_100
value: 41.347
- type: ndcg_at_1000
value: 43.986
- type: ndcg_at_3
value: 30.768
- type: ndcg_at_5
value: 32.694
- type: precision_at_1
value: 27.239
- type: precision_at_10
value: 6.138
- type: precision_at_100
value: 1.014
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 13.775
- type: precision_at_5
value: 9.776
- type: recall_at_1
value: 23.046
- type: recall_at_10
value: 46.178999999999995
- type: recall_at_100
value: 72.366
- type: recall_at_1000
value: 90.713
- type: recall_at_3
value: 33.214
- type: recall_at_5
value: 38.186
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.006999999999998
- type: map_at_10
value: 30.791
- type: map_at_100
value: 32.495000000000005
- type: map_at_1000
value: 32.731
- type: map_at_3
value: 27.738000000000003
- type: map_at_5
value: 29.115000000000002
- type: mrr_at_1
value: 27.47
- type: mrr_at_10
value: 36.355
- type: mrr_at_100
value: 37.207
- type: mrr_at_1000
value: 37.262
- type: mrr_at_3
value: 33.267
- type: mrr_at_5
value: 34.918
- type: ndcg_at_1
value: 27.47
- type: ndcg_at_10
value: 37.314
- type: ndcg_at_100
value: 43.228
- type: ndcg_at_1000
value: 45.789
- type: ndcg_at_3
value: 32.178000000000004
- type: ndcg_at_5
value: 34.082
- type: precision_at_1
value: 27.47
- type: precision_at_10
value: 7.5889999999999995
- type: precision_at_100
value: 1.587
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 15.613
- type: precision_at_5
value: 11.501999999999999
- type: recall_at_1
value: 22.006999999999998
- type: recall_at_10
value: 49.811
- type: recall_at_100
value: 76.175
- type: recall_at_1000
value: 92.432
- type: recall_at_3
value: 34.445
- type: recall_at_5
value: 39.834
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.085
- type: map_at_10
value: 21.83
- type: map_at_100
value: 22.915
- type: map_at_1000
value: 23.033
- type: map_at_3
value: 19.755
- type: map_at_5
value: 20.936
- type: mrr_at_1
value: 15.712000000000002
- type: mrr_at_10
value: 23.581
- type: mrr_at_100
value: 24.598
- type: mrr_at_1000
value: 24.697
- type: mrr_at_3
value: 21.442
- type: mrr_at_5
value: 22.68
- type: ndcg_at_1
value: 15.712000000000002
- type: ndcg_at_10
value: 26.104
- type: ndcg_at_100
value: 31.6
- type: ndcg_at_1000
value: 34.755
- type: ndcg_at_3
value: 21.953
- type: ndcg_at_5
value: 24.003
- type: precision_at_1
value: 15.712000000000002
- type: precision_at_10
value: 4.324999999999999
- type: precision_at_100
value: 0.76
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 9.797
- type: precision_at_5
value: 7.135
- type: recall_at_1
value: 14.085
- type: recall_at_10
value: 37.468
- type: recall_at_100
value: 62.946000000000005
- type: recall_at_1000
value: 86.701
- type: recall_at_3
value: 26.476
- type: recall_at_5
value: 31.294
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.305
- type: map_at_10
value: 14.971
- type: map_at_100
value: 16.634999999999998
- type: map_at_1000
value: 16.842
- type: map_at_3
value: 12.281
- type: map_at_5
value: 13.608
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 29.104000000000003
- type: mrr_at_100
value: 30.198000000000004
- type: mrr_at_1000
value: 30.264999999999997
- type: mrr_at_3
value: 25.548
- type: mrr_at_5
value: 27.805999999999997
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 21.84
- type: ndcg_at_100
value: 28.871999999999996
- type: ndcg_at_1000
value: 32.868
- type: ndcg_at_3
value: 16.991
- type: ndcg_at_5
value: 18.859
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.002999999999999
- type: precision_at_100
value: 1.4409999999999998
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 12.681999999999999
- type: precision_at_5
value: 10.176
- type: recall_at_1
value: 8.305
- type: recall_at_10
value: 27.492
- type: recall_at_100
value: 52.053000000000004
- type: recall_at_1000
value: 74.52600000000001
- type: recall_at_3
value: 15.931999999999999
- type: recall_at_5
value: 20.71
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.928
- type: map_at_10
value: 17.128
- type: map_at_100
value: 23.657
- type: map_at_1000
value: 25.28
- type: map_at_3
value: 12.623999999999999
- type: map_at_5
value: 14.536999999999999
- type: mrr_at_1
value: 60.25
- type: mrr_at_10
value: 70.391
- type: mrr_at_100
value: 70.87
- type: mrr_at_1000
value: 70.879
- type: mrr_at_3
value: 69.125
- type: mrr_at_5
value: 69.85
- type: ndcg_at_1
value: 49.75
- type: ndcg_at_10
value: 37.473
- type: ndcg_at_100
value: 41.569
- type: ndcg_at_1000
value: 49.318
- type: ndcg_at_3
value: 42.791000000000004
- type: ndcg_at_5
value: 39.568999999999996
- type: precision_at_1
value: 60.25
- type: precision_at_10
value: 29.4
- type: precision_at_100
value: 9.468
- type: precision_at_1000
value: 2.077
- type: precision_at_3
value: 46.417
- type: precision_at_5
value: 37.95
- type: recall_at_1
value: 7.928
- type: recall_at_10
value: 22.603
- type: recall_at_100
value: 47.193000000000005
- type: recall_at_1000
value: 71.346
- type: recall_at_3
value: 14.472
- type: recall_at_5
value: 17.485999999999997
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.37
- type: f1
value: 40.27549527082307
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.849999999999994
- type: map_at_10
value: 54.54
- type: map_at_100
value: 55.143
- type: map_at_1000
value: 55.16799999999999
- type: map_at_3
value: 51.318
- type: map_at_5
value: 53.403999999999996
- type: mrr_at_1
value: 43.984
- type: mrr_at_10
value: 58.07600000000001
- type: mrr_at_100
value: 58.605
- type: mrr_at_1000
value: 58.620000000000005
- type: mrr_at_3
value: 54.918
- type: mrr_at_5
value: 56.974999999999994
- type: ndcg_at_1
value: 43.984
- type: ndcg_at_10
value: 61.768
- type: ndcg_at_100
value: 64.42099999999999
- type: ndcg_at_1000
value: 64.97800000000001
- type: ndcg_at_3
value: 55.533
- type: ndcg_at_5
value: 59.14
- type: precision_at_1
value: 43.984
- type: precision_at_10
value: 8.822000000000001
- type: precision_at_100
value: 1.0250000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 23.172
- type: precision_at_5
value: 15.857
- type: recall_at_1
value: 40.849999999999994
- type: recall_at_10
value: 80.663
- type: recall_at_100
value: 92.29899999999999
- type: recall_at_1000
value: 96.233
- type: recall_at_3
value: 64.031
- type: recall_at_5
value: 72.764
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.852
- type: map_at_10
value: 31.392999999999997
- type: map_at_100
value: 33.324999999999996
- type: map_at_1000
value: 33.5
- type: map_at_3
value: 27.249000000000002
- type: map_at_5
value: 29.401
- type: mrr_at_1
value: 38.272
- type: mrr_at_10
value: 47.076
- type: mrr_at_100
value: 47.902
- type: mrr_at_1000
value: 47.942
- type: mrr_at_3
value: 44.624
- type: mrr_at_5
value: 46.098
- type: ndcg_at_1
value: 38.272
- type: ndcg_at_10
value: 39.214
- type: ndcg_at_100
value: 46.341
- type: ndcg_at_1000
value: 49.282
- type: ndcg_at_3
value: 35.757
- type: ndcg_at_5
value: 36.669000000000004
- type: precision_at_1
value: 38.272
- type: precision_at_10
value: 11.219
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_3
value: 24.331
- type: precision_at_5
value: 17.87
- type: recall_at_1
value: 18.852
- type: recall_at_10
value: 46.078
- type: recall_at_100
value: 72.898
- type: recall_at_1000
value: 90.644
- type: recall_at_3
value: 32.221
- type: recall_at_5
value: 37.894
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.714
- type: map_at_10
value: 46.743
- type: map_at_100
value: 47.64
- type: map_at_1000
value: 47.717999999999996
- type: map_at_3
value: 43.872
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 65.429
- type: mrr_at_10
value: 72.507
- type: mrr_at_100
value: 72.80799999999999
- type: mrr_at_1000
value: 72.82600000000001
- type: mrr_at_3
value: 70.98100000000001
- type: mrr_at_5
value: 71.967
- type: ndcg_at_1
value: 65.429
- type: ndcg_at_10
value: 55.84
- type: ndcg_at_100
value: 59.183
- type: ndcg_at_1000
value: 60.81100000000001
- type: ndcg_at_3
value: 51.327
- type: ndcg_at_5
value: 53.803
- type: precision_at_1
value: 65.429
- type: precision_at_10
value: 11.620999999999999
- type: precision_at_100
value: 1.425
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 32.077
- type: precision_at_5
value: 21.199
- type: recall_at_1
value: 32.714
- type: recall_at_10
value: 58.103
- type: recall_at_100
value: 71.269
- type: recall_at_1000
value: 82.073
- type: recall_at_3
value: 48.116
- type: recall_at_5
value: 52.998
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.5384
- type: ap
value: 84.07244605493386
- type: f1
value: 88.51724847689141
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 17.169999999999998
- type: map_at_10
value: 28.601
- type: map_at_100
value: 29.869
- type: map_at_1000
value: 29.929
- type: map_at_3
value: 24.69
- type: map_at_5
value: 26.929
- type: mrr_at_1
value: 17.622
- type: mrr_at_10
value: 29.079
- type: mrr_at_100
value: 30.301000000000002
- type: mrr_at_1000
value: 30.354
- type: mrr_at_3
value: 25.232
- type: mrr_at_5
value: 27.458
- type: ndcg_at_1
value: 17.622
- type: ndcg_at_10
value: 35.357
- type: ndcg_at_100
value: 41.623
- type: ndcg_at_1000
value: 43.119
- type: ndcg_at_3
value: 27.344
- type: ndcg_at_5
value: 31.367
- type: precision_at_1
value: 17.622
- type: precision_at_10
value: 5.891
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 11.91
- type: precision_at_5
value: 9.189
- type: recall_at_1
value: 17.169999999999998
- type: recall_at_10
value: 56.369
- type: recall_at_100
value: 85.649
- type: recall_at_1000
value: 97.096
- type: recall_at_3
value: 34.499
- type: recall_at_5
value: 44.194
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.4810761513908
- type: f1
value: 90.43983880684412
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 59.824441404468764
- type: f1
value: 41.140870725364245
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.23940820443846
- type: f1
value: 63.866444501622254
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.98251513113652
- type: f1
value: 72.26944666028224
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.7972586123168
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.77986542120405
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.827020967264875
- type: mrr
value: 29.491954633310463
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.099
- type: map_at_10
value: 11.205
- type: map_at_100
value: 14.533999999999999
- type: map_at_1000
value: 16.012999999999998
- type: map_at_3
value: 8.074
- type: map_at_5
value: 9.515
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 50.903
- type: mrr_at_100
value: 51.62
- type: mrr_at_1000
value: 51.661
- type: mrr_at_3
value: 48.71
- type: mrr_at_5
value: 49.886
- type: ndcg_at_1
value: 39.938
- type: ndcg_at_10
value: 31.572
- type: ndcg_at_100
value: 29.652
- type: ndcg_at_1000
value: 38.971000000000004
- type: ndcg_at_3
value: 36.758
- type: ndcg_at_5
value: 34.481
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 24.056
- type: precision_at_100
value: 7.666
- type: precision_at_1000
value: 2.11
- type: precision_at_3
value: 35.088
- type: precision_at_5
value: 30.402
- type: recall_at_1
value: 5.099
- type: recall_at_10
value: 14.780999999999999
- type: recall_at_100
value: 31.653
- type: recall_at_1000
value: 63.724000000000004
- type: recall_at_3
value: 8.933
- type: recall_at_5
value: 11.413
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.232
- type: map_at_10
value: 39.704
- type: map_at_100
value: 40.93
- type: map_at_1000
value: 40.963
- type: map_at_3
value: 34.882999999999996
- type: map_at_5
value: 37.597
- type: mrr_at_1
value: 28.853
- type: mrr_at_10
value: 42.218
- type: mrr_at_100
value: 43.179
- type: mrr_at_1000
value: 43.202
- type: mrr_at_3
value: 38.157000000000004
- type: mrr_at_5
value: 40.483000000000004
- type: ndcg_at_1
value: 28.823999999999998
- type: ndcg_at_10
value: 47.729
- type: ndcg_at_100
value: 52.898999999999994
- type: ndcg_at_1000
value: 53.686
- type: ndcg_at_3
value: 38.548
- type: ndcg_at_5
value: 43.119
- type: precision_at_1
value: 28.823999999999998
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 17.922
- type: precision_at_5
value: 13.331000000000001
- type: recall_at_1
value: 25.232
- type: recall_at_10
value: 69.95
- type: recall_at_100
value: 92.333
- type: recall_at_1000
value: 98.218
- type: recall_at_3
value: 45.946999999999996
- type: recall_at_5
value: 56.598000000000006
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.083
- type: map_at_10
value: 84.16
- type: map_at_100
value: 84.807
- type: map_at_1000
value: 84.822
- type: map_at_3
value: 81.181
- type: map_at_5
value: 83.094
- type: mrr_at_1
value: 80.83
- type: mrr_at_10
value: 87.173
- type: mrr_at_100
value: 87.28399999999999
- type: mrr_at_1000
value: 87.285
- type: mrr_at_3
value: 86.21
- type: mrr_at_5
value: 86.886
- type: ndcg_at_1
value: 80.85
- type: ndcg_at_10
value: 87.96199999999999
- type: ndcg_at_100
value: 89.225
- type: ndcg_at_1000
value: 89.32900000000001
- type: ndcg_at_3
value: 85.101
- type: ndcg_at_5
value: 86.74
- type: precision_at_1
value: 80.85
- type: precision_at_10
value: 13.378
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.269999999999996
- type: precision_at_5
value: 24.568
- type: recall_at_1
value: 70.083
- type: recall_at_10
value: 95.194
- type: recall_at_100
value: 99.51100000000001
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 87.027
- type: recall_at_5
value: 91.604
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 49.23995527989351
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 58.81838285815132
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.463
- type: map_at_10
value: 11.387
- type: map_at_100
value: 13.621
- type: map_at_1000
value: 13.982
- type: map_at_3
value: 8.022
- type: map_at_5
value: 9.464
- type: mrr_at_1
value: 22.0
- type: mrr_at_10
value: 32.902
- type: mrr_at_100
value: 34.036
- type: mrr_at_1000
value: 34.093
- type: mrr_at_3
value: 29.317
- type: mrr_at_5
value: 31.141999999999996
- type: ndcg_at_1
value: 22.0
- type: ndcg_at_10
value: 19.483
- type: ndcg_at_100
value: 28.118
- type: ndcg_at_1000
value: 34.355999999999995
- type: ndcg_at_3
value: 18.032999999999998
- type: ndcg_at_5
value: 15.613
- type: precision_at_1
value: 22.0
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 2.282
- type: precision_at_1000
value: 0.378
- type: precision_at_3
value: 16.967
- type: precision_at_5
value: 13.719999999999999
- type: recall_at_1
value: 4.463
- type: recall_at_10
value: 20.963
- type: recall_at_100
value: 46.322
- type: recall_at_1000
value: 76.713
- type: recall_at_3
value: 10.308
- type: recall_at_5
value: 13.888
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.84563850617418
- type: cos_sim_spearman
value: 79.68400149970968
- type: euclidean_pearson
value: 82.75837054306935
- type: euclidean_spearman
value: 79.6840247099308
- type: manhattan_pearson
value: 82.73540970661433
- type: manhattan_spearman
value: 79.66844192381396
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 77.81430060207765
- type: cos_sim_spearman
value: 69.94012785669503
- type: euclidean_pearson
value: 74.59541033717807
- type: euclidean_spearman
value: 69.94010426360558
- type: manhattan_pearson
value: 74.56400760328428
- type: manhattan_spearman
value: 69.92806341709132
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 74.81511131302516
- type: cos_sim_spearman
value: 79.62625737683277
- type: euclidean_pearson
value: 77.45706601071352
- type: euclidean_spearman
value: 79.62625730605384
- type: manhattan_pearson
value: 77.3334919461798
- type: manhattan_spearman
value: 79.46650568750321
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 73.43273002333167
- type: cos_sim_spearman
value: 71.34169412319034
- type: euclidean_pearson
value: 73.58628382548541
- type: euclidean_spearman
value: 71.3417253984979
- type: manhattan_pearson
value: 73.528660458135
- type: manhattan_spearman
value: 71.29492315680972
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 79.7528032458892
- type: cos_sim_spearman
value: 82.80881645241301
- type: euclidean_pearson
value: 81.49065539033161
- type: euclidean_spearman
value: 82.80881911292607
- type: manhattan_pearson
value: 81.48964007971324
- type: manhattan_spearman
value: 82.82325035979333
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 77.46090733936299
- type: cos_sim_spearman
value: 82.65342321085096
- type: euclidean_pearson
value: 81.6531230438912
- type: euclidean_spearman
value: 82.65342321085096
- type: manhattan_pearson
value: 81.6092667285348
- type: manhattan_spearman
value: 82.63811888178375
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.36545028139912
- type: cos_sim_spearman
value: 88.8877047117119
- type: euclidean_pearson
value: 89.26155338214109
- type: euclidean_spearman
value: 88.8877047117119
- type: manhattan_pearson
value: 89.18322803188939
- type: manhattan_spearman
value: 88.74063459127103
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.11778566972097
- type: cos_sim_spearman
value: 68.4773054255333
- type: euclidean_pearson
value: 69.06680343994812
- type: euclidean_spearman
value: 68.4773054255333
- type: manhattan_pearson
value: 68.866622017307
- type: manhattan_spearman
value: 68.15156375349754
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64200346870874
- type: cos_sim_spearman
value: 86.5043271353841
- type: euclidean_pearson
value: 86.36114472174944
- type: euclidean_spearman
value: 86.50433264867542
- type: manhattan_pearson
value: 86.29057032602698
- type: manhattan_spearman
value: 86.45171993846006
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.9286721127671
- type: mrr
value: 95.76535029966404
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 53.067
- type: map_at_10
value: 63.580000000000005
- type: map_at_100
value: 64.238
- type: map_at_1000
value: 64.265
- type: map_at_3
value: 60.402
- type: map_at_5
value: 62.456999999999994
- type: mrr_at_1
value: 55.667
- type: mrr_at_10
value: 64.566
- type: mrr_at_100
value: 65.054
- type: mrr_at_1000
value: 65.08
- type: mrr_at_3
value: 61.944
- type: mrr_at_5
value: 63.761
- type: ndcg_at_1
value: 55.667
- type: ndcg_at_10
value: 68.354
- type: ndcg_at_100
value: 70.94
- type: ndcg_at_1000
value: 71.759
- type: ndcg_at_3
value: 62.814
- type: ndcg_at_5
value: 66.084
- type: precision_at_1
value: 55.667
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.06
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.444
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 53.067
- type: recall_at_10
value: 81.89999999999999
- type: recall_at_100
value: 93.0
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 67.589
- type: recall_at_5
value: 75.506
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.61287128712871
- type: cos_sim_ap
value: 88.21320824985605
- type: cos_sim_f1
value: 80.15451472718492
- type: cos_sim_precision
value: 77.49766573295986
- type: cos_sim_recall
value: 83.0
- type: dot_accuracy
value: 99.61287128712871
- type: dot_ap
value: 88.21329368452164
- type: dot_f1
value: 80.15451472718492
- type: dot_precision
value: 77.49766573295986
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.61287128712871
- type: euclidean_ap
value: 88.21328696557586
- type: euclidean_f1
value: 80.15451472718492
- type: euclidean_precision
value: 77.49766573295986
- type: euclidean_recall
value: 83.0
- type: manhattan_accuracy
value: 99.61287128712871
- type: manhattan_ap
value: 88.26324850748259
- type: manhattan_f1
value: 80.36839554047503
- type: manhattan_precision
value: 77.9868297271872
- type: manhattan_recall
value: 82.89999999999999
- type: max_accuracy
value: 99.61287128712871
- type: max_ap
value: 88.26324850748259
- type: max_f1
value: 80.36839554047503
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.88814718001269
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.6023610692526
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 46.52388882316049
- type: mrr
value: 46.98781406501995
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 27.06710433803873
- type: cos_sim_spearman
value: 30.251609255580625
- type: dot_pearson
value: 27.0671067449827
- type: dot_spearman
value: 30.251609255580625
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.16999999999999998
- type: map_at_10
value: 1.204
- type: map_at_100
value: 6.800000000000001
- type: map_at_1000
value: 16.753999999999998
- type: map_at_3
value: 0.441
- type: map_at_5
value: 0.692
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 75.5
- type: mrr_at_100
value: 75.667
- type: mrr_at_1000
value: 75.667
- type: mrr_at_3
value: 72.333
- type: mrr_at_5
value: 74.63300000000001
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 55.074
- type: ndcg_at_100
value: 43.342999999999996
- type: ndcg_at_1000
value: 40.217999999999996
- type: ndcg_at_3
value: 56.754000000000005
- type: ndcg_at_5
value: 56.267999999999994
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 57.8
- type: precision_at_100
value: 44.34
- type: precision_at_1000
value: 17.791999999999998
- type: precision_at_3
value: 59.333000000000006
- type: precision_at_5
value: 59.199999999999996
- type: recall_at_1
value: 0.16999999999999998
- type: recall_at_10
value: 1.522
- type: recall_at_100
value: 10.52
- type: recall_at_1000
value: 38.324999999999996
- type: recall_at_3
value: 0.48
- type: recall_at_5
value: 0.792
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.078
- type: map_at_10
value: 5.463
- type: map_at_100
value: 9.914000000000001
- type: map_at_1000
value: 11.285
- type: map_at_3
value: 2.467
- type: map_at_5
value: 3.277
- type: mrr_at_1
value: 12.245000000000001
- type: mrr_at_10
value: 26.708
- type: mrr_at_100
value: 28.303
- type: mrr_at_1000
value: 28.321
- type: mrr_at_3
value: 23.128999999999998
- type: mrr_at_5
value: 24.558
- type: ndcg_at_1
value: 11.224
- type: ndcg_at_10
value: 15.221000000000002
- type: ndcg_at_100
value: 26.346999999999998
- type: ndcg_at_1000
value: 37.969
- type: ndcg_at_3
value: 13.318
- type: ndcg_at_5
value: 12.576
- type: precision_at_1
value: 12.245000000000001
- type: precision_at_10
value: 15.101999999999999
- type: precision_at_100
value: 5.9799999999999995
- type: precision_at_1000
value: 1.367
- type: precision_at_3
value: 14.966
- type: precision_at_5
value: 13.469000000000001
- type: recall_at_1
value: 1.078
- type: recall_at_10
value: 11.157
- type: recall_at_100
value: 38.190000000000005
- type: recall_at_1000
value: 73.831
- type: recall_at_3
value: 3.598
- type: recall_at_5
value: 5.122999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.1582
- type: ap
value: 14.92669801560963
- type: f1
value: 55.12856312799308
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.88511601584606
- type: f1
value: 58.85264576560652
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 46.12909899358978
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.26876080348096
- type: cos_sim_ap
value: 64.7970240303098
- type: cos_sim_f1
value: 60.64945026847354
- type: cos_sim_precision
value: 58.82936507936508
- type: cos_sim_recall
value: 62.58575197889182
- type: dot_accuracy
value: 83.26876080348096
- type: dot_ap
value: 64.7970187478589
- type: dot_f1
value: 60.64945026847354
- type: dot_precision
value: 58.82936507936508
- type: dot_recall
value: 62.58575197889182
- type: euclidean_accuracy
value: 83.26876080348096
- type: euclidean_ap
value: 64.7970350594888
- type: euclidean_f1
value: 60.64945026847354
- type: euclidean_precision
value: 58.82936507936508
- type: euclidean_recall
value: 62.58575197889182
- type: manhattan_accuracy
value: 83.22703701496096
- type: manhattan_ap
value: 64.77489173378227
- type: manhattan_f1
value: 60.60833646263612
- type: manhattan_precision
value: 57.65658490116694
- type: manhattan_recall
value: 63.87862796833773
- type: max_accuracy
value: 83.26876080348096
- type: max_ap
value: 64.7970350594888
- type: max_f1
value: 60.64945026847354
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.43613924787519
- type: cos_sim_ap
value: 80.48760161140632
- type: cos_sim_f1
value: 73.17976287962401
- type: cos_sim_precision
value: 68.0641102059739
- type: cos_sim_recall
value: 79.12688635663689
- type: dot_accuracy
value: 86.43613924787519
- type: dot_ap
value: 80.487599095952
- type: dot_f1
value: 73.17976287962401
- type: dot_precision
value: 68.0641102059739
- type: dot_recall
value: 79.12688635663689
- type: euclidean_accuracy
value: 86.43613924787519
- type: euclidean_ap
value: 80.48760636334994
- type: euclidean_f1
value: 73.17976287962401
- type: euclidean_precision
value: 68.0641102059739
- type: euclidean_recall
value: 79.12688635663689
- type: manhattan_accuracy
value: 86.41673458299375
- type: manhattan_ap
value: 80.47462765492928
- type: manhattan_f1
value: 73.16093396936981
- type: manhattan_precision
value: 68.48183710468005
- type: manhattan_recall
value: 78.5263319987681
- type: max_accuracy
value: 86.43613924787519
- type: max_ap
value: 80.48760636334994
- type: max_f1
value: 73.17976287962401
--- |
kaikaikaikaikaikaikaikai/marian-finetuned-kftt-ja-to-en | kaikaikaikaikaikaikaikai | 2023-07-27T08:28:04Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kftt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-20T03:04:20Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kftt
metrics:
- bleu
model-index:
- name: marian-finetuned-kftt-ja-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kftt
type: kftt
config: en-ja
split: validation
args: en-ja
metrics:
- name: Bleu
type: bleu
value: 19.353560365370512
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kftt-ja-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on the kftt dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9124
- Bleu: 19.3536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.3.2
- Tokenizers 0.13.3
|
bochen0909/Pixelcopter-PLE-v0 | bochen0909 | 2023-07-27T08:27:37Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T02:33:24Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 46.00 +/- 34.32
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sm136599/chatfoodie-koalpaca-polyglot-5.8b-20step | sm136599 | 2023-07-27T08:18:02Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T08:17:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
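For reference, a sketch of the equivalent `transformers` `BitsAndBytesConfig` (not part of the original training code; it simply mirrors the values listed above):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above (4-bit NF4, double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```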
### Framework versions
- PEFT 0.5.0.dev0
|
zpdeaccount/old-bart-finetuned-pressrelease | zpdeaccount | 2023-07-27T08:16:06Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-07-27T08:00:09Z | ---
pipeline_tag: summarization
--- |
zpdeaccount/bart-finetuned-pressrelease | zpdeaccount | 2023-07-27T08:15:53Z | 115 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-07-27T07:57:20Z | ---
pipeline_tag: summarization
--- |
fadliaulawi/distilbert-base-uncased-finetuned-squad-d5716d28 | fadliaulawi | 2023-07-27T08:13:22Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-27T08:11:14Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
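As an illustrative sketch (not from the original card; the id and answer below are placeholder values), the `squad` metric can be computed like this:
```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Both lists follow the SQuAD format; "example-1" and the answer span are placeholders.
predictions = [{"id": "example-1", "prediction_text": "Denver Broncos"}]
references = [
    {"id": "example-1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```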
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
jayantdocplix/blokeAI-13b | jayantdocplix | 2023-07-27T08:12:16Z | 28 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"en",
"arxiv:2303.14070",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-27T07:18:38Z | ---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# medalpaca-13B-GGML
These are 4-bit, 5-bit and 8-bit quantised GGML format models of [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).
This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit).
* [4-bit, 5-bit 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/medalpaca-13B-GGML).
* [medalpaca's float32 HF format repo for GPU inference and further conversions](https://huggingface.co/medalpaca/medalpaca-13b).
## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `medalpaca-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
| `medalpaca-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| `medalpaca-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -m medalpaca-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```
Change `-t 8` to the number of physical CPU cores you have.
## How to run in `text-generation-webui`
GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: MedAlpaca 13b
## Table of Contents
[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)
[Model Usage](#model-usage)
[Limitations](#limitations)
## Model Description
### Architecture
`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used ChatGPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated question from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
### Inference
You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
```python
from transformers import pipeline
qa_pipeline = pipeline("question-answering", model="medalpaca/medalpaca-7b", tokenizer="medalpaca/medalpaca-7b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = qa_pipeline({"question": question, "context": context})
print(answer)
```
## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
|
omarxadel/wav2vec2-large-xlsr-53-arabic-egyptian | omarxadel | 2023-07-27T08:11:47Z | 91 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"CTC",
"Attention",
"Transformer",
"ar",
"dataset:MGB-3",
"dataset:egyptian-arabic-conversational-speech-corpus",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-12T14:17:43Z | ---
language: "ar"
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- Transformer
license: "cc-by-nc-4.0"
datasets:
- MGB-3
- egyptian-arabic-conversational-speech-corpus
metrics:
- wer
model-index:
- name: omarxadel/hubert-large-arabic-egyptian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 29.3755
- name: Validation WER
type: wer
value: 29.1828
---
# Wav2Vec2-XLSR-53 - with CTC fine-tuned on MGB-3 and Egyptian Arabic Conversational Speech Corpus (No LM)
This model is a fine-tuned version of [Wav2Vec2-XLSR-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). We finetuned this model on the MGB-3 and Egyptian Arabic Conversational Speech Corpus datasets, achieving a WER of `29.3755%`.
The performance of the model on the datasets is as follows:
| Valid WER | Test WER |
|:---------:|:--------:|
| 29.18 | 29.37 |
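A minimal inference sketch (not part of the original card), assuming the standard `transformers` ASR pipeline and a 16 kHz mono audio file:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="omarxadel/wav2vec2-large-xlsr-53-arabic-egyptian",
)

# "sample.wav" is a placeholder path to a 16 kHz mono recording in Egyptian Arabic.
print(asr("sample.wav")["text"])
```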
# Acknowledgement
Model fine-tuning and data processing for this work were performed as a part of a Graduation Project from Faculty of Engineering, Alexandria University, CCE Program. |
Naruke/a2c-AntBulletEnv-v0 | Naruke | 2023-07-27T07:52:36Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T07:51:30Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1388.27 +/- 220.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Dungmtk/GenerAd-AI | Dungmtk | 2023-07-27T07:50:21Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T07:50:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Mimokrokodil/Dyoma | Mimokrokodil | 2023-07-27T07:47:12Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-27T07:30:51Z | ---
language:
- en
tags:
- DMdyoma, lora, Stable Diffusion
---
# The mascot of the "Detsky Mir" children's store chain, a bear named Dyoma
info
https://disk.yandex.ru/d/yPub8MjFrLCI_g |
andbue/byt5-base-latin-normalize | andbue | 2023-07-27T07:42:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"la",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-21T16:36:56Z | ---
language: la
tag: text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "normal: Finis uero filosophie speculatiue non est nisi perfeccio anime ."
inference:
parameters:
max_length: 1024
license: cc-by-sa-4.0
---
This model was trained to translate Latin sentences from a medieval orthography to a more classical one.
Prefix for normalization is "normal: ". More details will follow soon.
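A minimal usage sketch (assuming the standard `transformers` text2text pipeline; the input is the widget example above):
```python
from transformers import pipeline

normalizer = pipeline("text2text-generation", model="andbue/byt5-base-latin-normalize")

out = normalizer(
    "normal: Finis uero filosophie speculatiue non est nisi perfeccio anime .",
    max_length=1024,
)
print(out[0]["generated_text"])
```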
|
s3nh/mamba-gpt-3b-v2-GGML | s3nh | 2023-07-27T07:29:36Z | 0 | 3 | null | [
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
]
| text-generation | 2023-07-27T07:24:35Z | ---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/CobraMamba/mamba-gpt-3b-v2).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
# output_dir and ggml_file should point to the folder and filename of the downloaded GGML model
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
max_new_tokens=256,
temperature=0.9,
top_p= 0.7)
```
# Original model card
## Summary
We have fine-tuned the OpenLLaMA model and surpassed the original model in multiple evaluation subtasks, making it currently the best-performing 3B model, with performance comparable to LLaMA-7B.
- Base model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="CobraMamba/mamba-gpt-3b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download the mamba_gpt_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from mamba_gpt_pipeline import MambaGPTTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"CobraMamba/mamba-gpt-3b",
use_fast=False,
padding_side="left",
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
"CobraMamba/mamba-gpt-3b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
generate_text = MambaGPTTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "CobraMamba/mamba-gpt-3b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | finetuned-GPT 3B | OpenLLaMA 3B |
| ---------------------- | -------- | ------------ |
| anli_r1/acc | **0.35** | 0.33 |
| anli_r2/acc | **0.33** | 0.32 |
| anli_r3/acc | 0.35 | 0.35 |
| arc_challenge/acc | **0.35** | 0.34 |
| arc_challenge/acc_norm | 0.37 | 0.37 |
| arc_easy/acc | **0.71** | 0.69 |
| arc_easy/acc_norm | 0.65 | 0.65 |
| boolq/acc | **0.72** | 0.66 |
| hellaswag/acc | **0.49** | 0.43 |
| hellaswag/acc_norm | 0.66 | **0.67** |
| openbookqa/acc | 0.26 | **0.27** |
| openbookqa/acc_norm | 0.40 | 0.40 |
| piqa/acc | **0.76** | 0.75 |
| piqa/acc_norm | 0.76 | 0.76 |
| record/em | 0.88 | 0.88 |
| record/f1 | 0.88 | **0.89** |
| rte/acc | 0.55 | **0.58** |
| truthfulqa_mc/mc1 | **0.27** | 0.22 |
| truthfulqa_mc/mc2 | **0.37** | 0.35 |
| wic/acc | **0.49** | 0.48 |
| winogrande/acc | **0.63** | 0.62 |
| Average | **0.53** | 0.52 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Devops-hestabit/OtherHalf-pt | Devops-hestabit | 2023-07-27T07:20:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
]
| null | 2023-05-15T10:25:29Z | ---
license: creativeml-openrail-m
---
|
dhiruHF/falcon7b-FT-DocQA-v4 | dhiruHF | 2023-07-27T07:13:40Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T07:13:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/ms_MVTCAE_beta_10_scale_True_seed_3 | asenella | 2023-07-27T07:06:40Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T07:06:38Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
FinchResearch/llama2-archimedes-13b-lora | FinchResearch | 2023-07-27T07:06:33Z | 6 | 0 | peft | [
"peft",
"question-answering",
"en",
"dataset:timdettmers/openassistant-guanaco",
"dataset:tatsu-lab/alpaca",
"dataset:BI55/MedText",
"license:mit",
"region:us"
]
| question-answering | 2023-07-26T11:31:01Z | ---
library_name: peft
license: mit
datasets:
- timdettmers/openassistant-guanaco
- tatsu-lab/alpaca
- BI55/MedText
language:
- en
pipeline_tag: question-answering
---
Here is a README.md explaining how to run the Archimedes model locally:
# Archimedes Model
This README provides instructions for running the Archimedes conversational AI assistant locally.
## Requirements
- Python 3.6+
- [Transformers](https://huggingface.co/docs/transformers/installation)
- [Peft](https://github.com/hazyresearch/peft)
- PyTorch
- Access to the LLAMA 2 model files or a cloned public model
Install requirements:
```
!pip install transformers
!pip install peft
!pip install torch
!pip install datasets
!pip install bitsandbytes
```
## Usage
```python
import transformers
from peft import LoraConfig, get_peft_model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from huggingface_hub import login
login() # Need access to the gated model.
# Load LLAMA 2 model
model_name = "meta-llama/Llama-2-13b-chat-hf"
# Quantization configuration
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
# Load model
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
trust_remote_code=True
)
# Load LoRA configuration
lora_config = LoraConfig.from_pretrained('harpyerr/archimedes-300s-7b-chat')
model = get_peft_model(model, lora_config)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
# Define prompt
text = "Can you tell me who made Space-X?"
prompt = "You are a helpful assistant. Please provide an informative response. \n\n" + text
# Generate response
device = "cuda:0"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This loads the LLAMA 2 model, applies 4-bit quantization and LoRA optimizations, constructs a prompt, and generates a response.
See the [docs](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) for more details.
|
FinchResearch/llama2-stable-7b-lora | FinchResearch | 2023-07-27T07:05:34Z | 5 | 3 | peft | [
"peft",
"question-answering",
"en",
"dataset:timdettmers/openassistant-guanaco",
"dataset:tatsu-lab/alpaca",
"dataset:BI55/MedText",
"license:mit",
"region:us"
]
| question-answering | 2023-07-26T00:10:38Z | ---
library_name: peft
license: mit
datasets:
- timdettmers/openassistant-guanaco
- tatsu-lab/alpaca
- BI55/MedText
language:
- en
pipeline_tag: question-answering
---
Here is a README.md explaining how to run the Archimedes model locally:
# Archimedes Model
This README provides instructions for running the Archimedes conversational AI assistant locally.
## Requirements
- Python 3.6+
- [Transformers](https://huggingface.co/docs/transformers/installation)
- [Peft](https://github.com/hazyresearch/peft)
- PyTorch
- Access to the LLAMA 2 model files or a cloned public model
Install requirements:
```
!pip install transformers
!pip install peft
!pip install torch
!pip install datasets
!pip install bitsandbytes
```
## Usage
```python
import transformers
from peft import LoraConfig, get_peft_model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from huggingface_hub import login
login() # Need access to the gated model.
# Load LLAMA 2 model
model_name = "meta-llama/Llama-2-7b-chat-hf"
# Quantization configuration
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
# Load model
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
trust_remote_code=True
)
# Load LoRA configuration
lora_config = LoraConfig.from_pretrained('harpyerr/archimedes-300s-7b-chat')
model = get_peft_model(model, lora_config)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
# Define prompt
text = "Can you tell me who made Space-X?"
prompt = "You are a helpful assistant. Please provide an informative response. \n\n" + text
# Generate response
device = "cuda:0"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This loads the LLAMA 2 model, applies 4-bit quantization and LoRA optimizations, constructs a prompt, and generates a response.
See the [docs](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) for more details.
|
jaycalma/rare-puppers | jaycalma | 2023-07-27T06:59:59Z | 224 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-27T02:25:03Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### labrador

#### pomeranian

#### poodle
 |
asenella/ms_MVTCAE_beta_25_scale_False_seed_2 | asenella | 2023-07-27T06:58:08Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T06:58:07Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_MVTCAE_beta_10_scale_False_seed_3 | asenella | 2023-07-27T06:57:59Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T06:57:58Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
sumet/speecht5_finetuned_voxpopuli_nl | sumet | 2023-07-27T06:55:05Z | 19 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-07-13T03:02:35Z | ---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 NL - Sumet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 NL - Sumet
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi NL dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 0.54 | 1000 | nan |
| 0.0 | 1.09 | 2000 | nan |
| 0.0 | 1.63 | 3000 | nan |
| 0.0 | 2.18 | 4000 | nan |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
ld76/wav2vec2-base-finetuned-gtzan-2 | ld76 | 2023-07-27T06:54:49Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-07-27T02:05:08Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7770
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0152 | 1.0 | 112 | 1.9017 | 0.52 |
| 1.6232 | 2.0 | 225 | 1.5400 | 0.53 |
| 1.2989 | 3.0 | 337 | 1.1494 | 0.65 |
| 1.2035 | 4.0 | 450 | 1.1189 | 0.69 |
| 0.6804 | 5.0 | 562 | 0.8873 | 0.69 |
| 0.7305 | 6.0 | 675 | 0.7527 | 0.81 |
| 0.4738 | 7.0 | 787 | 0.6880 | 0.78 |
| 0.2824 | 8.0 | 900 | 0.7893 | 0.73 |
| 0.3863 | 9.0 | 1012 | 0.5786 | 0.85 |
| 0.4061 | 10.0 | 1125 | 0.7070 | 0.81 |
| 0.1302 | 11.0 | 1237 | 0.5829 | 0.88 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
FaizanMunsaf/llama2-qlora-finetunined-french | FaizanMunsaf | 2023-07-27T06:48:00Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T06:47:49Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
chunwoolee0/marian-finetuned-kde4-en-to-ko | chunwoolee0 | 2023-07-27T06:46:22Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-ko",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-27T05:47:54Z | ---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-en-ko
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-ko
split: train
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 6.0084151979608835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1884
- Bleu: 6.0084
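A minimal usage sketch (not part of the original card; the example sentence is a placeholder), assuming the standard `transformers` translation pipeline:
```python
from transformers import pipeline

translator = pipeline("translation", model="chunwoolee0/marian-finetuned-kde4-en-to-ko")

print(translator("Open the file menu and select Save As.")[0]["translation_text"])
```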
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
eisneim/cn-clip_vit-b-16 | eisneim | 2023-07-27T06:43:25Z | 0 | 0 | null | [
"onnx",
"clip",
"multi modal",
"zero-shot-classification",
"zh",
"license:apache-2.0",
"region:us"
]
| zero-shot-classification | 2023-07-27T03:58:45Z | ---
license: apache-2.0
language:
- zh
pipeline_tag: zero-shot-classification
tags:
- clip
- multi modal
---
Chinese-CLIP Model Deployment: ONNX
These ONNX files were converted using this [script](https://github.com/OFA-Sys/Chinese-CLIP/blob/master/deployment_En.md).
You will likely encounter this error while converting:
```
Exporting the operator 'aten::unflatten' to ONNX opset version 13 is not supported.
```
So I uploaded the converted files here for your convenience.
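A rough loading sketch with `onnxruntime` (the file name and the 224x224 input size are assumptions based on typical Chinese-CLIP ViT-B/16 exports, not confirmed by this card):
```python
import numpy as np
import onnxruntime as ort

# Placeholder file name; substitute the ONNX file you downloaded from this repo.
sess = ort.InferenceSession("vit-b-16.img.fp32.onnx", providers=["CPUExecutionProvider"])

# Chinese-CLIP ViT-B/16 image encoders typically take NCHW float32 input at 224x224.
dummy_image = np.random.rand(1, 3, 224, 224).astype(np.float32)
input_name = sess.get_inputs()[0].name

outputs = sess.run(None, {input_name: dummy_image})
print([o.shape for o in outputs])
```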
Chinese CLIP model: [OFA-Sys/Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP) |
tomoohive/Reinforce-Pixelcopter-PLE-v0 | tomoohive | 2023-07-27T06:32:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T05:26:20Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.20 +/- 25.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xianbin/ppo-SnowballTarget | xianbin | 2023-07-27T06:19:44Z | 16 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-07-27T06:19:39Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: xianbin/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
raicrits/topicChangeDetector_v1 | raicrits | 2023-07-27T06:18:32Z | 33,763 | 0 | transformers | [
"transformers",
"pytorch",
"text-classification",
"it",
"dataset:raicrits/newsTopicChange",
"arxiv:1910.09700",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-21T11:59:40Z | ---
license: other
language:
- it
pipeline_tag: text-classification
widget:
- text: >-
Ripartire la parola d'ordine, al governo chiediamo di accelerare la campagna
sui vaccini e di lavorare a un cronoprogramma delle riaperture. Dobbiamo
dare una prospettiva di rinascita a tutti gli italiani, dall'opposizione
ancora all'attacco del governo, gli italiani sono esausti di fare sacrifici
che non portano a nulla. Sono quattro le persone indagate dalla Procura di
Roma per le minacce via mail al ministro della Salute. Tra ottobre del 2020
e il gennaio del 2021 avrebbero inviato al ministro dei messaggi dal
contenuto gravemente minaccioso. Al ministro la solidarietà di tutto il
mondo politico e a causa della pandemia si assottigliano i redditi delle
famiglie italiane. Aumenta anche la pressione fiscale. Lo rileva l'Istat.
- text: >-
L'Agenzia delle entrate ha dato il via oggi ai primi ordini di pagamento dei
contributi a fondo perduto per lavoratori autonomi e partite IVA previsti
dal decreto Sostegni. E scattata la corsa contro il tempo per far arrivare i
contributi a fondo perduto previsti dal decreto sostegno a favore di aziende
e professionisti. L'Agenzia delle entrate ha iniziato l'invio degli ordini
di pagamento per le richieste giunte entro il 5 Aprile, una prima tranches
che vale quasi due miliardi di euro.
- text: >-
Le terapie intensive hanno superato la soglia del 30% di riempimento. La
lotta al virus e anche lotta alle fake news, prosegue la collaborazione tra
ministero della Salute e Twitter quando si cercano notizie sul Covid del
Social rimanda le pagine del ministero, includendo anche le ultime
informazioni sui vaccini. COVID-19 è stato l'hashtag più twittato a livello
globale nel 2020. La poltrona negata da Erdogan ad Ursula von der Leyen, lo
avete sentito? Fa ancora discutere dentro e fuori dal Parlamento europeo:
Marco Clementi.
- text: >-
I bambini che soffrono di autismo hanno gli stessi diritti di tutti gli
altri bambini sottolinea garante per l'infanzia, occorre dunque fare rete
tra famiglia, scuola, pediatri e servizi sociali. Domani mattina alle 705 su
Rai Uno torna la nostra rubrica di approfondimento 7 giorni. L'anticipazione
nel servizio.
- text: >-
Brutta avventura per il giocatore della Roma, vittima di una rapina in casa
la scorsa notte, e tre uomini armati sono entrati nella sua abitazione
romana e lo hanno costretto ad aprire la cassaforte rubando Rolex e
gioielli. Oltre al calciatore c'era anche la moglie in casa, entrambi
illesi. Parliamo ora di campionato di serie a Il posticipo di domenica vedrà
di fronte l'Inter capolista ed in fuga e il Napoli che al San Paolo cerca
punti. Per un posto in Champions League.
metrics:
- accuracy
- precision
- recall
datasets:
- raicrits/newsTopicChange
---
# Model Card for raicrits/topicChangeDetector_v1
<!-- Provide a quick summary of what the model is/does. -->
This model analyses the input text and indicates whether or not it contains a change of topic (labels TOPPICCHANGE and SAMETOPIC, respectively).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Alberto Messina ([email protected])
- **Model type:** BERT for Sequence Classification
- **Language(s) (NLP):** Italian
- **License:** TBD
- **Finetuned from model:** https://huggingface.co/xlm-roberta-base
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** N/A
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model should be given as input a short paragraph of Italian text taken from a news programme or article, and it answers whether or not that paragraph contains a change of topic.
The model has been trained to detect topic changes without a priori knowledge of possible separation points (e.g., paragraphs or speaker turns).
For this reason it is sensitive to how much of the text belongs to each of the two consecutive topics, and it performs best when
the topic change occurs approximately in the middle of the input. To reduce the impact of this issue, it is suggested to run
the model on a sequence of partially overlapping pieces of text taken from the document to be analysed, and then to post-process the resulting sequence of predictions
to consolidate a decision, as in the sketch below.
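For illustration, a minimal sketch of the overlapping-window splitting described above; the window and stride sizes are arbitrary choices, not values prescribed by the model authors.
```python
def overlapping_windows(text, window_words=80, stride_words=40):
    """Yield partially overlapping word windows from a transcript."""
    words = text.split()
    for start in range(0, max(len(words) - window_words, 0) + 1, stride_words):
        yield " ".join(words[start:start + window_words])

# Each window can then be classified individually and the sequence of
# predictions post-processed to locate the topic change.
```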
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
TBA
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model should not be used as a general-purpose topic change detector, i.e. on text that does not originate from news programme transcriptions or similar content.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The training dataset is made up of automatic transcriptions from RAI Italian newscasts, therefore there is an intrinsic bias in the kind
of topics that can be tracked for change.
## How to Get Started with the Model
Use the code below to get started with the model.
TBA
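Pending the official snippet, a minimal, unofficial sketch is shown below; it assumes the checkpoint works with the standard `transformers` text-classification pipeline and that the labels are the two mentioned above. The example text is a placeholder.
```python
from transformers import pipeline

# Load the checkpoint as a standard text-classification pipeline (assumption).
classifier = pipeline("text-classification", model="raicrits/topicChangeDetector_v1")

# Placeholder input: in practice, pass one of the overlapping windows
# produced as described in the "Uses" section above.
window = "Testo italiano di esempio preso da un notiziario."
print(classifier(window))  # e.g. [{'label': 'SAMETOPIC', 'score': 0.98}] (illustrative output)
```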
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
TBA
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
TBA
#### Training Hyperparameters
- **Training regime:** Mixed Precision
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
TBA
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
TBA
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
TBA
### Results
TBA
#### Summary
TBA
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 2 NVIDIA A100/40Gb
- **Hours used:** 2
- **Cloud Provider:** Private Infrastructure
- **Carbon Emitted:** 0.22 kg CO2 eq.
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
TBA
## More Information [optional]
The development of this model is partially supported by H2020 Project AI4Media - A European Excellence Centre for Media, Society and Democracy (Grant nr. 951911) - http://ai4media.eu
## Model Card Authors [optional]
Alberto Messina
## Model Card Contact
[email protected] |
Jonathaniu/llama2-breast-cancer-13b-knowledge-epoch-5 | Jonathaniu | 2023-07-27T06:14:45Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T06:14:25Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
lunarlist/tts-thai-last-step | lunarlist | 2023-07-27T06:13:19Z | 7 | 2 | nemo | [
"nemo",
"text-to-speech",
"th",
"dataset:lunarlist/edited_common_voice",
"license:mit",
"region:us"
]
| text-to-speech | 2023-07-18T10:42:57Z | ---
license: mit
datasets:
- lunarlist/edited_common_voice
language:
- th
library_name: nemo
pipeline_tag: text-to-speech
---
This model is a Thai TTS model that uses a voice from the [Common Voice dataset](https://commonvoice.mozilla.org/), with the voice modified so that it does not sound like the original speaker.
> pip install nemo_toolkit['tts'] soundfile
```python
from nemo.collections.tts.models import UnivNetModel
from nemo.collections.tts.models import Tacotron2Model
import torch
import soundfile as sf

# Load the fine-tuned Tacotron2 spectrogram generator and a pretrained UnivNet vocoder
model = Tacotron2Model.from_pretrained("lunarlist/tts-thai-last-step").to('cpu')
vcoder_model = UnivNetModel.from_pretrained(model_name="tts_en_libritts_univnet")

text = 'ภาษาไทย ง่าย นิด เดียว'

# Map each character to its index in the model's label vocabulary,
# then wrap the sequence with the start/end token ids (66 and 67)
dict_idx = {k: i for i, k in enumerate(model.hparams["cfg"]['labels'])}
parsed2 = torch.Tensor([[66] + [dict_idx[i] for i in text if i] + [67]]).int().to("cpu")

# Text -> mel spectrogram -> waveform
spectrogram2 = model.generate_spectrogram(tokens=parsed2)
audio2 = vcoder_model.convert_spectrogram_to_audio(spec=spectrogram2)

# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio2.to('cpu').detach().numpy()[0], 22050)
```
Medium: [Text-To-Speech ภาษาไทยด้วย Tacotron2](https://medium.com/@taetiyateachamatavorn/text-to-speech-%E0%B8%A0%E0%B8%B2%E0%B8%A9%E0%B8%B2%E0%B9%84%E0%B8%97%E0%B8%A2%E0%B8%94%E0%B9%89%E0%B8%A7%E0%B8%A2-tacotron2-986417b44edc) |
SimonSun/llama2-7B-qlora-finetunined-french-200-epoque | SimonSun | 2023-07-27T06:04:47Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T05:45:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
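For reference, a hedged sketch of how these settings would typically be expressed with `transformers`' `BitsAndBytesConfig` when reloading a base model for this adapter; the base model id below is a placeholder assumption, not stated in this card.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above (4-bit NF4, float16 compute, no double quant).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model id (assumption)
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "SimonSun/llama2-7B-qlora-finetunined-french-200-epoque")
```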
### Framework versions
- PEFT 0.5.0.dev0
|
quantumaikr/llama-2-70B-guanaco-ko-lora | quantumaikr | 2023-07-27T05:59:04Z | 2 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-07-27T05:58:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
sukiee/qlora-koalpaca-polyglot-5.8b-hotissue_v3 | sukiee | 2023-07-27T05:58:54Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-26T13:15:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
rohn132/Q_learning_taxi_v3 | rohn132 | 2023-07-27T05:57:43Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-27T05:54:03Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q_learning_taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook
# (it downloads the pickled Q-table dictionary from the Hub).
model = load_from_hub(repo_id="rohn132/Q_learning_taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ketong3906/my_awesome_model_classification_w_adapter | ketong3906 | 2023-07-27T05:56:10Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-27T03:02:03Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model_classification_w_adapter
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train[:300]
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_classification_w_adapter
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0035
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 15 | 0.0120 | 1.0 |
| No log | 2.0 | 30 | 0.0035 | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
asenella/ms_JMVAE_beta_25_scale_True_seed_3 | asenella | 2023-07-27T05:55:07Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T05:55:05Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/ms_JMVAE_beta_10_scale_True_seed_3 | asenella | 2023-07-27T05:53:47Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-07-27T05:53:45Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
liuyt75/t5-small_prefix_tuning_sentences_75agree_10 | liuyt75 | 2023-07-27T05:52:07Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-25T18:11:44Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|